{"id":462,"date":"2023-03-16T20:17:36","date_gmt":"2023-03-16T20:17:36","guid":{"rendered":"https:\/\/todaysainews.com\/index.php\/2023\/03\/16\/the-impact-lab-google-ai-blog\/"},"modified":"2025-04-27T07:33:50","modified_gmt":"2025-04-27T07:33:50","slug":"the-impact-lab-google-ai-blog","status":"publish","type":"post","link":"https:\/\/todaysainews.com\/index.php\/2023\/03\/16\/the-impact-lab-google-ai-blog\/","title":{"rendered":"The Impact Lab \u2013 Google AI Blog"},"content":{"rendered":"<p> [ad_1]<br \/>\n<\/p>\n<div id=\"post-body-8933631861084710416\">\n<span class=\"byline-author\">Posted by Jamila Smith-Loud, Human Rights &amp; Social Impact Research Lead, Google Research, Responsible AI and Human-Centered Technology Team<\/span><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEh0_xt0D2xUP3mwnbE3nDQnzFidTPor0WNxYjtgeP52PqJyESFPpzT816Z814WUFn3Fg6uCWOIfopeQHktNpCxNhZ2cmcxbPBkRqMbHsoO13Jhv41vQeSHBY_Lv9_8yKO5IPhBVrRubnfQdMhjvibx8SJVldllA7SzrxxCYywwAgJLM2En0R2iF9TQ_lg\/s640\/RAI%20Impact%20Lab%2002%20med%20topbottom.gif\" style=\"display: none;\"\/><\/p>\n<p>\nGlobalized technology has the potential to create large-scale societal impact, and having a grounded research approach rooted in existing international human and civil rights standards is a critical component to assuring responsible and ethical AI development and deployment. The Impact Lab team, part of Google\u2019s <a href=\"https:\/\/research.google\/teams\/responsible-ai\/\">Responsible AI Team<\/a>, employs a range of interdisciplinary methodologies to ensure critical and rich analysis of the potential implications of technology development. The team\u2019s mission is to examine socioeconomic and human rights impacts of AI,  publish foundational research, and incubate novel mitigations enabling machine learning (ML) practitioners to advance global equity. 
We study and develop scalable, rigorous, and evidence-based solutions using data analysis, human rights, and participatory frameworks.\n<\/p>\n<p><a name=\"more\"\/> <\/p>\n<p>\nWhat makes the Impact Lab\u2019s work unique is its multidisciplinary approach and its diversity of experience, spanning both applied and academic research. Our aim is to expand the epistemic lens of Responsible AI to center the voices of historically marginalized communities, and to replace ungrounded analysis of impacts with a research-based approach to understanding how differing perspectives and experiences should shape the development of technology.\n<\/p>\n<p><\/p>\n<h2>What we do<\/h2>\n<p>\nIn response to the accelerating complexity of ML and the increased coupling between large-scale ML and people, our team critically examines traditional assumptions about how technology impacts society to deepen our understanding of this interplay. We collaborate with academic scholars in the areas of social science and philosophy of technology and publish foundational research focusing on how ML can be helpful and useful. We also offer research support to some of our organization\u2019s most challenging efforts, including the\u00a0<a href=\"https:\/\/blog.google\/technology\/ai\/ways-ai-is-scaling-helpful\/\">1,000 Languages Initiative<\/a> and ongoing work in the testing and evaluation of\u00a0<a href=\"https:\/\/ai.googleblog.com\/2023\/01\/google-research-2022-beyond-language.html\">language and generative models<\/a>. 
Our work gives weight to <a href=\"https:\/\/ai.google\/principles\/\">Google&#8217;s AI Principles<\/a>.\n<\/p>\n<p>\nTo that end, we:\n<\/p>\n<ul>\n<li>Conduct foundational and exploratory research towards the goal of creating scalable socio-technical solutions\n<\/li>\n<li>Create datasets and research-based frameworks to evaluate ML systems\n<\/li>\n<li>Define, identify, and assess negative societal impacts of AI\n<\/li>\n<li>Create responsible solutions to data collection used to build large models\n<\/li>\n<li>Develop novel methodologies and approaches that support responsible deployment of ML models and systems to ensure safety, fairness, robustness, and user accountability\n<\/li>\n<li>Translate external community and expert feedback into empirical insights to better understand user needs and impacts\n<\/li>\n<li>Seek equitable collaboration and strive for mutually beneficial partnerships\n<\/li>\n<\/ul>\n<p>\nWe strive not only to reimagine existing frameworks for assessing the adverse impact of AI to answer ambitious research questions, but also to promote the importance of this work.\n<\/p>\n<p><\/p>\n<h2>Current research efforts<\/h2>\n<h3>Understanding social problems<\/h3>\n<p>\nOur motivation for providing rigorous analytical tools and approaches is to ensure that social-technical impact and fairness are well understood in relation to cultural and historical nuances. This is quite important, as it helps develop the incentive and ability to better understand communities who experience the greatest burden and demonstrates the value of rigorous and focused analysis. Our goals are to proactively partner with external thought leaders in this problem space, reframe our existing mental models when assessing potential harms and impacts, and avoid relying on unfounded assumptions and stereotypes in ML technologies. 
We collaborate with researchers at Stanford, University of California Berkeley, University of Edinburgh, Mozilla Foundation, University of Michigan, Naval Postgraduate School, Data &amp; Society, EPFL, Australian National University, and McGill University.\n<\/p>\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n<tbody>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEjputhufJmqt6XY_hnXEyz2ab6W-h5_UN5kxWWh-etZVpdGHFAvPVB2kW-d2LoRv4OCx_Ap1ka2tHO7pC4X0pZsPdxJvi5L6IF3nu9hY9gbD2h3GTWYSKzzk5j5miBGru3GyumlkurrMdgk-SfzwtnwSN8r834vEu8GXOGt7LuBY63kV-PdhassI_K3gw\/s640\/RAI%20Impact%20Lab%2002%20med.gif\" style=\"margin-left: auto; margin-right: auto;\"><img decoding=\"async\" border=\"0\" data-original-height=\"320\" data-original-width=\"640\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEjputhufJmqt6XY_hnXEyz2ab6W-h5_UN5kxWWh-etZVpdGHFAvPVB2kW-d2LoRv4OCx_Ap1ka2tHO7pC4X0pZsPdxJvi5L6IF3nu9hY9gbD2h3GTWYSKzzk5j5miBGru3GyumlkurrMdgk-SfzwtnwSN8r834vEu8GXOGt7LuBY63kV-PdhassI_K3gw\/s16000\/RAI%20Impact%20Lab%2002%20med.gif\"\/><\/a><\/td>\n<\/tr>\n<tr>\n<td class=\"tr-caption\" style=\"text-align: center;\">We examine systemic social issues and generate useful artifacts for responsible AI development.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>Centering underrepresented voices<\/h3>\n<p>\nWe also developed the <a href=\"https:\/\/arxiv.org\/abs\/2303.08177\">Equitable AI Research Roundtable<\/a> 
(EARR), a novel community-based research coalition created to establish ongoing partnerships with external nonprofit and research organization leaders who are equity experts in the fields of education, law, social justice, AI ethics, and economic development. These partnerships offer the opportunity to engage with multidisciplinary experts on complex research questions related to how we center and understand equity using lessons from other domains. Our partners include <a href=\"https:\/\/www.policylink.org\/\">PolicyLink<\/a>; <a href=\"https:\/\/west.edtrust.org\/\">The Education Trust &#8211; West<\/a>; <a href=\"https:\/\/notley.com\/\">Notley<\/a>; <a href=\"https:\/\/partnershiponai.org\/\">Partnership on AI<\/a>; <a href=\"https:\/\/belonging.berkeley.edu\/\">Othering and Belonging Institute<\/a> at UC Berkeley; <a href=\"https:\/\/michelsonip.com\/hbcu-ip-futures-collaborative\/\">The Michelson Institute for Intellectual Property, HBCU IP Futures Collaborative<\/a> at Emory University; <a href=\"https:\/\/citris-uc.org\/\">Center for Information Technology Research in the Interest of Society<\/a> (CITRIS) at the Banatao Institute; and the <a href=\"https:\/\/www.utdanacenter.org\">Charles A. Dana Center<\/a> at the University of Texas at Austin. The goals of the EARR program are to: (1) center knowledge about the experiences of historically marginalized or underrepresented groups, (2) qualitatively understand and identify potential approaches for studying social harms and their analogies within the context of technology, and (3) expand the lens of expertise and relevant knowledge as it relates to our work on responsible and safe approaches to AI development.\n<\/p>\n<p>\nThrough semi-structured workshops and discussions, EARR has provided critical perspectives and feedback on how to conceptualize equity and vulnerability as they relate to AI technology. 
We have partnered with EARR contributors on a range of topics, from generative AI and algorithmic decision making to transparency and explainability, with outputs ranging from adversarial queries to frameworks and case studies. Translating research insights across disciplines into technical solutions is not always easy, but the partnership has been a rewarding one. We present our initial evaluation of this engagement in <a href=\"https:\/\/arxiv.org\/abs\/2303.08177\">this paper<\/a>.\n<\/p>\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n<tbody>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEgZSWDNV6X98gwnjLAMyj5sTe_c-MHf28yTAiWFp-45NQPDUjXpVGmppJgvXJ-zeBSuxngTguO6PNC88NwTpoQA3vGWI0J-pFJGYMB9CTJwTvbKCBIEMII6EMI6U79Nky2LE7hVEbaXVX0JFZg8Vwop8Bi0WNMMKnPJMS3Ce9P1z0BqLOoPkLQ5aHD0aw\/s1370\/image1.png\" style=\"margin-left: auto; margin-right: auto;\"><img loading=\"lazy\" decoding=\"async\" border=\"0\" data-original-height=\"976\" data-original-width=\"1370\" height=\"456\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEgZSWDNV6X98gwnjLAMyj5sTe_c-MHf28yTAiWFp-45NQPDUjXpVGmppJgvXJ-zeBSuxngTguO6PNC88NwTpoQA3vGWI0J-pFJGYMB9CTJwTvbKCBIEMII6EMI6U79Nky2LE7hVEbaXVX0JFZg8Vwop8Bi0WNMMKnPJMS3Ce9P1z0BqLOoPkLQ5aHD0aw\/w640-h456\/image1.png\" width=\"640\"\/><\/a><\/td>\n<\/tr>\n<tr>\n<td class=\"tr-caption\" style=\"text-align: center;\">EARR: Components of the ML development life cycle in which multidisciplinary knowledge is key for mitigating human biases.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>Grounding in civil and human rights values<\/h3>\n<p>In partnership with our <a href=\"https:\/\/about.google\/human-rights\/\">Civil and Human Rights Program<\/a>, our research and analysis process is grounded in internationally recognized human rights frameworks and 
standards, including the <a href=\"https:\/\/www.un.org\/en\/about-us\/universal-declaration-of-human-rights\">Universal Declaration of Human Rights<\/a> and the <a href=\"https:\/\/www.ohchr.org\/sites\/default\/files\/documents\/publications\/guidingprinciplesbusinesshr_en.pdf\">UN Guiding Principles on Business and Human Rights<\/a>. Utilizing civil and human rights frameworks as a starting point allows for a context-specific approach to research that takes into account how a technology will be deployed and its community impacts. Most importantly, a rights-based approach to research enables us to prioritize conceptual and applied methods that emphasize the importance of understanding the most vulnerable users and the most salient harms to better inform day-to-day decision making, product design, and long-term strategies.<\/p>\n<p><\/p>\n<h2>Ongoing work<\/h2>\n<h3>Social context to aid in dataset development and evaluation<\/h3>\n<p>\nWe seek to employ an approach to dataset curation, model development, and evaluation that is rooted in equity and that avoids expeditious but potentially risky approaches, such as utilizing incomplete data or not considering the historical and sociocultural factors related to a dataset. Responsible data collection and analysis requires an <a href=\"https:\/\/arxiv.org\/abs\/2010.13561\">additional level<\/a> of <a href=\"https:\/\/journals.sagepub.com\/doi\/epub\/10.1177\/20539517211035955\">careful consideration of the context<\/a> in which the data are created. For example, one may see differences in outcomes across demographic variables that will be used to build models, and should question the structural and system-level factors at play, as some variables could ultimately be a <a href=\"https:\/\/arxiv.org\/abs\/1912.03593\">reflection of historical, social and political factors<\/a>. 
By using proxy data, such as race or ethnicity, gender, or zip code, <a href=\"https:\/\/weallcount.com\/2020\/06\/26\/proxy-variables-part-2-race\/\">we are systematically merging together the lived experiences of an entire group of diverse people<\/a> and using them to train models that can recreate and maintain harmful and <a href=\"https:\/\/arxiv.org\/abs\/2102.05085\">inaccurate character profiles of entire populations<\/a>. Critical data analysis also requires a careful understanding that correlations or relationships between variables do not imply causation; the <em>association<\/em> we witness is often <em>caused<\/em> by multiple additional variables.\n<\/p>\n<h3>Relationship between social context and model outcomes<\/h3>\n<p>\nBuilding on this expanded and nuanced social understanding of data and dataset construction, we also approach the problem of <a href=\"https:\/\/arxiv.org\/pdf\/2001.00973.pdf\">anticipating or ameliorating the impact of ML models<\/a> once they have been <a href=\"https:\/\/arxiv.org\/pdf\/2210.03535.pdf\">deployed for use in the real world<\/a>. There are myriad ways in which the use of ML in various contexts \u2014 from education to health care \u2014 has exacerbated existing inequity because the developers and decision-making users of these systems lacked relevant social understanding and historical context and did not involve relevant stakeholders. This is a research challenge for the field of ML in general and one that is central to our team.\n<\/p>\n<h3>Globally responsible AI centering community experts<\/h3>\n<p>\nOur team also recognizes the saliency of understanding the socio-technical context globally. In line with Google\u2019s mission to \u201corganize the world\u2019s information and make it universally accessible and useful\u201d, our team is engaging in research partnerships around the world. 
For example, we are collaborating with <a href=\"https:\/\/www.mak.ac.ug\/\">the Natural Language Processing team and the Human Centered team in the Makerere Artificial Intelligence Lab<\/a> in Uganda to research cultural and language nuances as they relate to language model development.\n<\/p>\n<p><\/p>\n<h2>Conclusion<\/h2>\n<p>\nWe continue to address the impacts of ML models deployed in the real world by conducting further socio-technical research and engaging external experts who are also part of the communities that are historically and globally disenfranchised. The Impact Lab is excited to offer an approach that contributes to the development of solutions for applied problems through the use of social science, evaluation, and human rights epistemologies.\n<\/p>\n<p><\/p>\n<h2>Acknowledgements<\/h2>\n<p>\n<em>We would like to thank each member of the Impact Lab team \u2014 Jamila Smith-Loud, Andrew Smart, Jalon Hall, Darlene Neal, Amber Ebinama, and Qazi Mamunur Rashid \u2014 for all the hard work they do to ensure that ML is more responsible to its users and society across communities and around the world.<\/em>\n<\/p>\n<\/div>\n<p><a href=\"http:\/\/ai.googleblog.com\/2023\/03\/responsible-ai-at-google-research.html\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Posted by Jamila Smith-Loud, Human Rights &amp; Social Impact Research Lead, Google Research, Responsible AI and 
Human-Centered<\/p>\n","protected":false},"author":2,"featured_media":463,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[],"class_list":["post-462","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google-ai"],"_links":{"self":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts\/462","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/comments?post=462"}],"version-history":[{"count":1,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts\/462\/revisions"}],"predecessor-version":[{"id":2841,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts\/462\/revisions\/2841"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/media\/463"}],"wp:attachment":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/media?parent=462"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/categories?post=462"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/tags?post=462"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}