{"id":98,"date":"2023-01-24T22:10:28","date_gmt":"2023-01-24T22:10:28","guid":{"rendered":"https:\/\/todaysainews.com\/index.php\/2023\/01\/24\/responsible-ai-google-ai-blog\/"},"modified":"2025-04-27T07:36:13","modified_gmt":"2025-04-27T07:36:13","slug":"responsible-ai-google-ai-blog","status":"publish","type":"post","link":"https:\/\/todaysainews.com\/index.php\/2023\/01\/24\/responsible-ai-google-ai-blog\/","title":{"rendered":"Responsible AI \u2013 Google AI Blog"},"content":{"rendered":"<p> [ad_1]<br \/>\n<\/p>\n<div id=\"post-body-7224136058186317559\">\n<span class=\"byline-author\">Posted by Marian Croak, VP, Google Research, Responsible AI and Human-Centered Technology<\/span><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEipcYsV0wiSEmbfG2j8EMAujMsfK0_Y64m34AZEBKrAnrzFc9y2ILbSCNJi5bvlBJ06m4wrHc4xNTmE8Dc6Cbr0qshNTEDypl_t7Klrn9hLkPR7b0FptZYa01PAlKI2R9XljXhvF6smUGMsTb-zddR_Aedp-JCmUTC0iqllfLBWd0ObttsHX0JjI3JwnA\/s912\/image6.png\" style=\"display: none;\"\/><a name=\"more\"\/> <\/p>\n<p><!--\n\n<div style=\"margin-left: 10%; margin-right: 10%; text-align: center;\">\n  <span style=\"font-size: small; text-align: center;\"><i>This is the second post in our &#8220;Google Research, 2022 &amp; Beyond&#8221; series. Other topics in the series can be found below: <\/i><\/span>\n  \n  \n\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n      \n\n<colgroup>\n     \n\n<col style=\"width: 22%;\"><\/col>\n\n\n     \n\n<col style=\"width: 22%;\"><\/col>\n\n\n     \n\n<col style=\"width: 22%;\"><\/col>\n\n\n  <\/colgroup>\n\n\n\n\n<tbody>\n  \n\n<tr>\n\n<td style=\"font-size: small;\"><a href=\"https:\/\/ai.googleblog.com\/2023\/01\/google-research-2022-beyond-language.html#LanguageModels\">Language Models<\/a>\n   <\/td>\n\n\n\n<td style=\"font-size: small;\"><a href=\"https:\/\/ai.googleblog.com\/2023\/01\/google-research-2022-beyond-language.html#ComputerVision\">Computer Vision<\/a><\/td>\n\n\n    \n\n<td style=\"font-size: small;\"><a href=\"https:\/\/ai.googleblog.com\/2023\/01\/google-research-2022-beyond-language.html#MultimodalModels\">Multimodal Models<\/a>\n   <\/td>\n\n<\/tr>\n\n\n\n<tr>\n\n<td style=\"font-size: small;\"><a href=\"https:\/\/ai.googleblog.com\/2023\/01\/google-research-2022-beyond-language.html#GenerativeModels\">Generative Models<\/a>\n   <\/td>\n\n\n\n<td style=\"font-size: small;\"><a href=\"https:\/\/ai.googleblog.com\/2023\/01\/google-research-2022-beyond-responsible.html\n     \">Responsible AI<\/a>\n   <\/td>\n\n\n\n<td style=\"font-size: small;\">Algorithms*\n   <\/td>\n\n<\/tr>\n\n\n\n<tr>\n\n<td style=\"font-size: small;\">ML &amp; Computer Systems\n   <\/td>\n\n\n\n<td style=\"font-size: small;\">Robotics\n   <\/td>\n\n\n\n<td style=\"font-size: small;\">Health\n   <\/td>\n\n<\/tr>\n\n\n\n<tr>\n\n<td style=\"font-size: small;\">General Science &amp; Quantum\n   <\/td>\n\n\n\n<td style=\"font-size: small;\">Community Engagement\n   <\/td>\n\n\n\n<td><\/td>\n\n<\/tr>\n\n<\/tbody>\n\n<\/table>\n\n\n\n\n<div style=\"line-height: 60%;\">\n    <\/div>\n\n\n\n\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n\n<tbody>\n  \n\n<tr>\n\n<td class=\"tr-caption\" style=\"font-size: small; text-align: center;\">* Other articles in the series will be linked as they are 
<p>The last year showed tremendous breakthroughs in artificial intelligence (AI), particularly in large language models (LLMs) and text-to-image models. These technological advances require that we be thoughtful and intentional in how they are developed and deployed. In this blog post, we share ways we have approached <a href="https://research.google/research-areas/responsible-ai/">Responsible AI</a> across our research in the past year and where we’re headed in 2023. We highlight four primary themes covering foundational and socio-technical research, applied research, and product solutions, as part of our commitment to build AI products in a responsible and ethical manner, in alignment with our <a href="https://ai.google/principles/">AI Principles</a>.</p>

<h2>Theme 1: Responsible AI Research Advancements</h2>

<h3>Machine Learning Research</h3>

<p>When machine learning (ML) systems are used in real-world contexts, they can fail to behave in expected ways, which reduces their realized benefit. Our research identifies situations in which unexpected behavior may arise, so that we can mitigate undesired outcomes.</p>

<p>Across several types of ML applications, we showed that models are often <a href="https://www.jmlr.org/papers/v23/20-1335.html">underspecified</a>: they perform well in exactly the situation in which they are trained, but may not be robust or <a href="https://arxiv.org/pdf/2202.01034.pdf">fair in new situations</a>, because they rely on “spurious correlations” — specific side effects that do not generalize. This poses a risk to ML system developers and demands new model evaluation practices.</p>

<p>We <a href="https://arxiv.org/abs/2205.05256">surveyed evaluation practices</a> currently used by ML researchers and introduced improved evaluation standards in <a href="https://arxiv.org/abs/2212.11254">work addressing common ML pitfalls</a>.</p>
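<p>To make the risk of spurious correlations concrete, here is a toy sketch of our own (illustrative only, not from the papers above): a classifier trained where a spurious feature tracks the label looks excellent in distribution, then degrades sharply once that correlation is broken.</p>

<pre>
# Toy sketch of underspecification via a spurious correlation (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """The label drives a weak 'true' signal; a second feature matches the
    label only at rate `spurious_corr` (the spurious correlation)."""
    y = rng.integers(0, 2, size=n)
    true_signal = y + rng.normal(0.0, 1.0, size=n)            # weakly predictive
    agree = rng.random(n) < spurious_corr
    spurious = np.where(agree, y, 1 - y) + rng.normal(0.0, 0.1, size=n)
    return np.stack([true_signal, spurious], axis=1), y

X_tr, y_tr = make_data(10_000, spurious_corr=0.95)            # correlation holds in training
X_iid, y_iid = make_data(2_000, spurious_corr=0.95)
X_shift, y_shift = make_data(2_000, spurious_corr=0.50)       # correlation broken at deployment

model = LogisticRegression().fit(X_tr, y_tr)
print("i.i.d. accuracy:  ", model.score(X_iid, y_iid))        # looks great
print("shifted accuracy: ", model.score(X_shift, y_shift))    # drops sharply
</pre>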
<p>We identified and demonstrated techniques to mitigate causal “<a href="https://arxiv.org/abs/2207.10384">shortcuts</a>”, which lead to a lack of ML system robustness and to dependency on sensitive attributes, such as age or gender.</p>

<p><em>[Figure] Shortcut learning: Age impacts correct medical diagnosis.</em></p>

<p>To better understand the causes of and mitigations for robustness issues, we dug deeper into model design in specific domains. In computer vision, we studied the <a href="https://arxiv.org/abs/2111.10659">robustness of new vision transformer models</a> and developed new negative data augmentation techniques to <a href="https://arxiv.org/abs/2110.07858">improve their robustness</a>. For natural language tasks, we similarly investigated how <a href="https://arxiv.org/pdf/2211.06348.pdf">different data distributions improve generalization across different groups</a> and how <a href="https://arxiv.org/pdf/2210.16298.pdf">ensembles</a> and <a href="https://arxiv.org/abs/2207.07411">pre-trained models</a> can help.</p>

<p>Another key part of our ML work involves developing techniques to build models that <a href="https://arxiv.org/abs/2205.15860">are more inclusive</a>. For example, we <a href="https://arxiv.org/abs/2210.03535">look to external communities to guide</a> understanding of when and why our evaluations fall short, using <a href="https://scholar.google.com/citations?view_op=view_citation&amp;hl=en&amp;user=WOAlvmoAAAAJ&amp;sortby=pubdate&amp;citation_for_view=WOAlvmoAAAAJ:t7zJ5fGR-2UC">participatory systems</a>, which explicitly enable joint ownership of predictions and allow people to choose whether to disclose information on sensitive topics.</p>

<h3>Sociotechnical Research</h3>

<p>In our quest to include a diverse range of cultural contexts and voices in AI development and evaluation, we have strengthened <a href="https://dl.acm.org/doi/abs/10.1145/3491102.3517716">community-based research</a> efforts, focusing on particular communities who are less represented or may experience unfair outcomes of AI.</p>
<p>We specifically looked at evaluations of unfair gender bias, both in <a href="https://blog.google/technology/ai/reducing-gender-based-harms-in-ai-with-sunipa-dev/">natural language</a> and in contexts such as <a href="https://research.google/pubs/pub52060/">gender-inclusive health</a>. This work is advancing more accurate evaluations of unfair gender bias, so that our technologies can evaluate and mitigate harms for people with <a href="https://facctconference.org/2022/acceptedcraft.html#colab">queer and non-binary identities</a>.</p>

<p>Alongside our <a href="https://dl.acm.org/doi/fullHtml/10.1145/3523227.3551476">fairness advancements</a>, we also reached key milestones in our larger efforts to develop <a href="https://ai-cultures.github.io/">culturally-inclusive AI</a>. We championed the importance of <a href="https://arxiv.org/pdf/2211.13069.pdf">cross-cultural considerations in AI</a> — in particular, cultural differences in <a href="https://dl.acm.org/doi/abs/10.1145/3491102.3517533">user attitudes towards AI</a> and <a href="https://dl.acm.org/doi/10.1145/3531146.3533237">mechanisms for accountability</a> — and built <a href="https://arxiv.org/pdf/2209.12226.pdf">data and techniques that enable culturally-situated evaluations</a>, with a focus on the global south. We also described user <a href="https://aclanthology.org/2022.findings-naacl.17/">experiences of machine translation</a> in a variety of contexts, and suggested human-centered opportunities for their improvement.</p>

<h3>Human-Centered Research</h3>

<p>At Google, we focus on advancing human-centered research and design. Recently, our work showed how LLMs can be used to <a href="https://dl.acm.org/doi/abs/10.1145/3491101.3503564">rapidly prototype</a> new <a href="https://dl.acm.org/doi/10.1145/3491102.3517582">AI-based interactions</a>.</p>

<p>We also published five new interactive explorable visualizations that introduce key ideas and guidance to the research community, including how to use <a href="https://pair.withgoogle.com/explorables/saliency/">saliency to detect unintended biases in ML models</a>, and how <a href="http://ai.googleblog.com/2017/04/federated-learning-collaborative.html">federated learning</a> can be used to collaboratively <a href="https://pair.withgoogle.com/explorables/federated-learning/">train a model with data from multiple users</a> without any raw data leaving their devices, as in the toy sketch below.</p>
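<p>As a rough sketch of the federated learning idea from the explorable (a toy federated averaging loop, not Google’s production system): each client fits a local update on its own data, and only model weights are shared and averaged.</p>

<pre>
# Toy federated averaging (FedAvg-style) sketch: raw data never leaves a client.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

# Each client holds private (X, y) data that stays "on device".
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    clients.append((X, X @ true_w + rng.normal(0.0, 0.1, size=50)))

def local_sgd(w0, X, y, lr=0.1, steps=10):
    w = w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)   # linear-regression gradient step
    return w

global_w = np.zeros(3)
for _ in range(20):                            # communication rounds
    local_ws = [local_sgd(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)       # server averages weights only

print(global_w)  # approaches true_w without centralizing any raw data
</pre>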
<p>Our interpretability research explored how we can trace the behavior of language models <a href="https://arxiv.org/abs/2205.11482">back to the training data itself</a>, suggested new ways to compare <a href="https://arxiv.org/abs/2201.11196">differences in what models pay attention to</a>, examined how we can <a href="https://arxiv.org/abs/2206.09046">explain emergent behavior</a>, and showed how to <a href="https://openreview.net/forum?id=zt5JpGQ8WhH">identify human-understandable concepts learned by models</a>. We also proposed <a href="https://arxiv.org/abs/2205.09403">a new approach for recommender systems</a> that uses natural language explanations to make it easier for people to understand and control their recommendations.</p>
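<p>In the simplest linear case, many of these attribution ideas reduce to inspecting per-token contributions to a score. A toy sketch (hypothetical vocabulary and weights, not our production tooling):</p>

<pre>
# Toy token-attribution sketch: for a linear classifier, a token's contribution
# to the logit is weight * count, so we can surface what the model leans on.
import numpy as np

vocab = ["movie", "terrible", "great", "lesbian"]   # hypothetical vocabulary
weights = np.array([0.0, 2.5, -2.0, 1.8])           # toy toxicity weights; a large
                                                    # identity-term weight is a red flag

def attributions(tokens):
    counts = np.array([tokens.count(t) for t in vocab], dtype=float)
    contrib = weights * counts
    return sorted(zip(vocab, contrib), key=lambda kv: -abs(kv[1]))

print(attributions("the movie was great , i am a lesbian".split()))
# "lesbian" gets a large positive contribution => unintended bias to investigate.
</pre>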
<h3>Creativity and AI Research</h3>

<p>We <a href="https://arxiv.org/abs/2205.13683">initiated conversations</a> with creative teams on the rapidly changing relationship between AI technology and creativity. In the creative writing space, Google’s <a href="https://pair.withgoogle.com/">PAIR</a> and <a href="https://magenta.tensorflow.org/blog/">Magenta</a> teams developed a novel prototype for creative writing, and facilitated a <a href="https://wordcraft-writers-workshop.appspot.com/">writers’ workshop</a> to explore the potential and limits of AI to assist creative writing. The stories from a diverse set of creative writers were <a href="https://wordcraft-writers-workshop.appspot.com/stories">published as a collection</a>, along with <a href="https://arxiv.org/abs/2211.05030">workshop insights</a>. In the fashion space, we explored the relationship between <a href="https://arxiv.org/pdf/2203.00435.pdf">fashion design and cultural representation</a>, and in the music space, we started examining the <a href="https://scholar.google.it/citations?view_op=view_citation&amp;hl=en&amp;user=t5ak3j0AAAAJ&amp;sortby=pubdate&amp;citation_for_view=t5ak3j0AAAAJ:Fu2w8maKXqMC">risks and opportunities of AI tools for music</a>.</p>

<h2>Theme 2: Responsible AI Research in Products</h2>

<p>The ability to see yourself reflected in the world around you is important, yet image-based technologies often lack equitable representation, leaving people of color feeling overlooked and misrepresented. In addition to efforts to improve representation of diverse skin tones across Google products, we introduced a new skin tone scale designed to be more inclusive of the range of skin tones worldwide. Partnering with Harvard professor and sociologist <a href="https://www.ellismonk.com/">Dr. Ellis Monk</a>, we released the <a href="https://skintone.google/get-started">Monk Skin Tone (MST) Scale</a>, a 10-shade scale that is available to the research community and industry professionals for research and product development. Further, this scale is being incorporated into features on our products, continuing a long line of our work to improve diversity and skin tone representation on Image Search and in filters in Google Photos.</p>

<p><em>[Figure] The 10 shades of the Monk Skin Tone Scale.</em></p>

<p>This is one of many examples of how Responsible AI in Research works closely with products across the company to inform research and develop new techniques. In another example, we leveraged our past research on counterfactual data augmentation in natural language to <a href="https://blog.google/products/search/using-ai-keep-google-search-safe/">improve SafeSearch</a>, reducing unexpected shocking Search results by 30%, especially on searches related to ethnicity, sexual orientation, and gender. To improve video content moderation, we developed <a href="https://arxiv.org/pdf/2210.09500.pdf">new approaches for helping human raters focus their attention</a> on segments of long videos that are more likely to contain policy violations.</p>
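<p>Counterfactual data augmentation of the kind referenced above can be sketched very simply (a hypothetical swap list, not the production system): each training example is paired with an identity-term-swapped copy that keeps the same label, so the identity term itself stops being a usable signal.</p>

<pre>
# Toy counterfactual data augmentation sketch (hypothetical swap list).
SWAPS = {"man": "woman", "woman": "man", "gay": "straight", "straight": "gay"}

def counterfactual(tokens):
    return [SWAPS.get(t, t) for t in tokens]

example = ("i am a gay man and i love this video".split(), 0)   # (tokens, label: non-toxic)
augmented = (counterfactual(example[0]), example[1])             # same label, swapped terms

print(" ".join(augmented[0]))   # "i am a straight woman and i love this video"
</pre>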
<p>We have also continued our research on developing more precise ways of <a href="https://arxiv.org/pdf/2210.07755.pdf">evaluating equal treatment in recommender systems</a>, accounting for the broad diversity of users and use cases.</p>

<p>In the area of large models, we incorporated Responsible AI best practices as part of the development process, creating <a href="https://modelcards.withgoogle.com/model-reports">Model Cards</a> and <a href="https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533231">Data Cards</a> (more details below), Responsible AI benchmarks, and societal impact analyses for models such as <a href="https://arxiv.org/pdf/2112.06905.pdf">GLaM</a>, <a href="https://arxiv.org/abs/2204.02311">PaLM</a>, <a href="https://imagen.research.google/">Imagen</a>, and <a href="https://parti.research.google/">Parti</a>. We also showed that <a href="https://arxiv.org/pdf/2210.11416.pdf">instruction fine-tuning</a> yields improvements on many Responsible AI benchmarks. Because generative models are often trained and evaluated on human-annotated data, we focused on human-centric considerations like <a href="https://arxiv.org/pdf/2110.05719.pdf">rater disagreement</a> and <a href="https://arxiv.org/abs/2301.09406">rater diversity</a>. We also presented new capabilities that use large models to improve responsibility in other systems; for example, we explored how language models can <a href="https://aclanthology.org/2022.woah-1.20.pdf">generate more complex counterfactuals for counterfactual fairness probing</a>. We will continue to focus on these areas in 2023, including understanding the implications for downstream applications.</p>

<h2>Theme 3: Tooling and Techniques</h2>

<h3>Responsible Data</h3>

<p><strong>Data Documentation:</strong> Extending our earlier work on <a href="https://modelcards.withgoogle.com/model-reports">Model Cards</a> and the <a href="https://www.tensorflow.org/responsible_ai/model_card_toolkit/guide">Model Card Toolkit</a>, we released <a href="https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533231">Data Cards</a> and the <a href="https://sites.research.google/datacardsplaybook/">Data Cards Playbook</a>, providing developers with methods and tools to document appropriate uses and essential facts related to a model or dataset. We have also advanced research on best practices for data documentation, such as accounting for a dataset’s origins, <a href="https://dl.acm.org/doi/pdf/10.1145/3531146.3534647">annotation processes</a>, intended use cases, ethical considerations, and evolution.</p>
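<p>The core idea is to make essential dataset facts a first-class, reviewable artifact. A minimal sketch of what such a record might hold (an illustrative structure, not the actual Data Cards schema):</p>

<pre>
# Minimal illustrative "data card" record (not the released Data Cards schema).
from dataclasses import dataclass, field

@dataclass
class DataCard:
    name: str
    origins: str                     # where the data came from
    annotation_process: str          # how labels were produced
    intended_uses: list = field(default_factory=list)
    ethical_considerations: list = field(default_factory=list)
    last_updated: str = ""           # datasets evolve; track versions

card = DataCard(
    name="example-comment-dataset",                       # hypothetical dataset
    origins="public forum comments, 2020-2022",
    annotation_process="3 raters per item; disagreements adjudicated",
    intended_uses=["moderation-assistance research"],
    ethical_considerations=["contains offensive language"],
    last_updated="2022-12-01",
)
print(card)
</pre>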
<p>We also applied this work to healthcare, creating “<a href="https://arxiv.org/abs/2202.13028">healthsheets</a>” to <a href="https://www.nature.com/articles/s41591-022-01987-w">underlie the foundation</a> of our international <a href="https://www.datadiversity.org/people/project-team">Standing Together</a> collaboration, which brings together patients, health professionals, and policy-makers to develop standards that ensure datasets are diverse and inclusive, and to democratize AI.</p>

<p><strong>New Datasets:</strong></p>

<p><span style="text-decoration: underline;">Fairness:</span> We <a href="https://developers.google.com/codelabs/product-fairness-testing#0">released a new dataset</a> to assist in ML fairness and adversarial testing tasks, primarily for generative text datasets. The dataset contains 590 words and phrases that show interactions between adjectives, words, and phrases that have been shown to have stereotypical associations with specific individuals and groups based on their sensitive or protected characteristics.</p>

<p><em>[Figure] A partial list of the sensitive characteristics in the dataset, denoting their associations with adjectives and stereotypical associations.</em></p>

<p><span style="text-decoration: underline;">Toxicity:</span> We constructed and publicly released <a href="https://firstmonday.org/ojs/index.php/fm/article/view/12285">a dataset of 10,000 posts</a> to help identify when a comment’s toxicity depends on the comment it’s replying to. This improves the quality of moderation-assistance models and supports the research community working on better ways to remedy online toxicity.</p>
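<p>Datasets like the fairness one above are typically used to generate adversarial test cases. A toy sketch of the pattern (hypothetical terms and a stand-in classifier, not the released data):</p>

<pre>
# Toy adversarial fairness-testing sketch: cross templates with identity terms
# and adjectives, then look for score gaps across identity groups.
from itertools import product

templates = ["The {adj} {group} person sat down.",
             "I talked to a {adj} {group} friend."]
groups = ["young", "elderly", "queer", "muslim"]        # hypothetical sensitive terms
adjectives = ["lazy", "brilliant", "aggressive", "kind"]

tests = [t.format(adj=a, group=g) for t, a, g in product(templates, adjectives, groups)]

def toxicity(text):   # stand-in for a real classifier under test
    return 0.9 if "aggressive" in text and "queer" in text else 0.1

for g in groups:      # flag groups whose average score diverges from the rest
    subset = [s for s in tests if g in s]
    print(f"{g:>8}: {sum(map(toxicity, subset)) / len(subset):.2f}")
</pre>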
<p><span style="text-decoration: underline;">Societal Context Data:</span> We used our <a href="https://medium.com/jigsaw/scaling-machine-learning-fairness-with-societal-context-be73d4ad38e2">experimental societal context repository</a> (SCR) to supply the <a href="https://sites.google.com/corp/google.com/conversationai/team">Perspective</a> team with auxiliary identity and connotation context data for terms relating to categories such as ethnicity, religion, age, gender, or sexual orientation — in multiple languages. This <a href="https://medium.com/jigsaw/scaling-machine-learning-fairness-with-societal-context-be73d4ad38e2">auxiliary societal context data</a> can help augment and balance datasets to significantly reduce unintended biases, and was applied to the widely used <a href="https://sites.google.com/corp/google.com/conversationai/perspective">Perspective API</a> toxicity models.</p>

<h3>Learning Interpretability Tool (LIT)</h3>

<p>An important part of developing safer models is having the tools to help debug and understand them. To support this, we released a major update to the <a href="https://pair-code.github.io/lit/">Learning Interpretability Tool</a> (LIT), an open-source platform for visualizing and understanding ML models, which now supports images and tabular data. The tool has been widely used at Google to debug models, review model releases, identify fairness issues, and clean up datasets. It also now lets you visualize 10x more data than before, supporting up to hundreds of thousands of data points at once.</p>

<p><em>[Figure] A screenshot of the Learning Interpretability Tool displaying generated sentences in a data table.</em></p>

<h3>Counterfactual Logit Pairing</h3>

<p>ML models are sometimes susceptible to flipping their prediction when a sensitive attribute referenced in an input is either removed or replaced. For example, in a toxicity classifier, examples such as “I am a man” and “I am a lesbian” may incorrectly produce different outputs. To enable users in the open-source community to address unintended bias in their ML models, we launched a new library, <a href="https://www.tensorflow.org/responsible_ai/model_remediation/counterfactual/guide/counterfactual_overview">Counterfactual Logit Pairing</a> (CLP), which improves a model’s robustness to such perturbations and can positively influence a model’s stability, fairness, and safety.</p>
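<p>The idea behind CLP can be sketched in a few lines (an illustrative loss, not the tensorflow_model_remediation API): alongside the task loss, penalize the gap between the logits a model assigns to an example and to its counterfactual.</p>

<pre>
# Minimal counterfactual logit pairing sketch (illustrative, not the library API).
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])   # toy scorer over features

def clp_loss(x, y, x_cf, clp_weight=1.0):
    logits = model(x)        # e.g., encoding of "I am a man"
    cf_logits = model(x_cf)  # e.g., encoding of "I am a lesbian"
    task = tf.keras.losses.binary_crossentropy(y, tf.sigmoid(logits))
    pairing = tf.reduce_mean(tf.abs(logits - cf_logits))   # flip-resistance term
    return tf.reduce_mean(task) + clp_weight * pairing

x = tf.random.normal([8, 16])
x_cf = x + tf.random.normal([8, 16], stddev=0.1)           # stand-in counterfactual encodings
y = tf.cast(tf.random.uniform([8, 1]) > 0.5, tf.float32)
print(clp_loss(x, y, x_cf))
</pre>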
<h2>Theme 4: Demonstrating AI’s Societal Benefit</h2>

<p>We believe that AI can be used to explore and address hard, unanswered questions around humanitarian and environmental issues. Our research and engineering efforts span many areas, including accessibility, health, and media representation, with the end goal of promoting inclusion and meaningfully improving people’s lives.</p>

<h3>Accessibility</h3>

<p>Following many years of <a href="https://ieeexplore.ieee.org/abstract/document/9747516">research</a>, we launched <a href="https://sites.research.google/relate/">Project Relate</a>, an <a href="https://play.google.com/store/apps/details?id=com.google.research.projectrelate">Android app</a> that uses a personalized AI-based speech recognition model to enable people with non-standard speech to communicate more easily with others. The app is available to English speakers 18+ in Australia, Canada, Ghana, India, New Zealand, the UK, and the US.</p>

<p>To help catalyze advances in AI that benefit people with disabilities, we also launched the <a href="https://blog.google/outreach-initiatives/accessibility/speech-accessibility-project/">Speech Accessibility Project</a>. This project represents the culmination of a collaborative, multi-year effort between researchers at Google, Amazon, Apple, Meta, Microsoft, and the University of Illinois Urbana-Champaign. Together, this group built a large dataset of impaired speech that is <a href="https://forms.illinois.edu/sec/47708963">available</a> to developers to empower research and product development for accessibility applications. This work also complements our efforts to <a href="https://research.google/pubs/pub52025/">assist people with severe motor and speech impairments</a> through improvements to techniques that make use of a user’s eye gaze.</p>

<h3>Health</h3>

<p>We’re also focused on building technology to better the lives of people affected by chronic health conditions, while <a href="https://academyhealth.confex.com/academyhealth/2022hdpnhpc/meetingapp.cgi/Session/30399">addressing systemic inequities</a> and allowing for transparent data collection. As consumer technologies — such as fitness trackers and mobile phones — become central to data collection for health, we’ve explored the use of technology to <a href="https://arxiv.org/abs/2207.02941">improve the interpretability of clinical risk scores</a> and to <a href="https://arxiv.org/abs/2204.03969">better predict disability scores in chronic diseases</a>, leading to earlier treatment and care. And we advocated for the <a href="https://www.nature.com/articles/s42256-022-00559-4">importance of infrastructure and engineering</a> in this space.</p>
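<p>Interpretable clinical risk scores often take a points-based form. A toy sketch of the idea (hypothetical coefficients, not a validated score): model coefficients are rounded into integer points a clinician can add up, and the total maps back to a probability.</p>

<pre>
# Toy points-based risk score sketch (hypothetical coefficients, not medical advice).
import numpy as np

coef = {"age_over_65": 1.6, "smoker": 0.9, "hypertension": 0.7}  # fitted log-odds weights
intercept = -4.0
UNIT = 0.3                                                       # log-odds per point

points = {k: round(v / UNIT) for k, v in coef.items()}
print(points)   # {'age_over_65': 5, 'smoker': 3, 'hypertension': 2}

def risk(patient):
    score = sum(points[k] for k, present in patient.items() if present)
    return 1.0 / (1.0 + np.exp(-(intercept + UNIT * score)))     # points -> probability

print(risk({"age_over_65": True, "smoker": True, "hypertension": False}))  # ~0.17
</pre>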
<p>Many health applications use algorithms that are designed to calculate biometrics and benchmarks, and to generate recommendations based on variables that include sex at birth, but that might not account for users’ current gender identity. To address this issue, we completed a <a href="https://research.google/pubs/pub52060/">large, international study</a> of trans and non-binary users of consumer technologies and digital health applications, to learn how the data collection and algorithms used in these technologies can evolve to achieve fairness.</p>

<h3>Media</h3>

<p>We partnered with the <a href="https://seejane.org/">Geena Davis Institute on Gender in Media</a> (GDI) and the <a href="https://sail.usc.edu/">Signal Analysis and Interpretation Laboratory</a> (SAIL) at the University of Southern California (USC) to <a href="https://blog.google/technology/ai/using-ai-to-study-12-years-of-representation-in-tv/">study 12 years of representation in TV</a>. Based on an analysis of over 440 hours of TV programming, the <a href="https://seejane.org/research-informs-empowers/see-it-be-it-what-families-are-watching-on-tv/">report</a> highlights findings and brings attention to significant disparities in screen and speaking time for light- and dark-skinned characters, male and female characters, and younger and older characters. This first-of-its-kind collaboration uses advanced AI models to understand how people-oriented stories are portrayed in media, with the ultimate goal of inspiring equitable representation in mainstream media.</p>

<p><em>[Figure] MUSE demo. Source: Video Collection / Getty Images.</em></p>

<h2>Plans for 2023 and Beyond</h2>

<p>We’re committed to creating research and products that exemplify positive, inclusive, and safe experiences for everyone.
This begins by understanding the many aspects of AI risks and safety inherent in the innovative work that we do, and by including diverse sets of voices in coming to this understanding.</p>

<ul>
<li><em>Responsible AI Research Advancements:</em> We will strive to understand the implications of the technology that we create, through improved metrics and evaluations, and devise methodology to enable people to use technology to become better world citizens.</li>
<li><em>Responsible AI Research in Products:</em> As products leverage new AI capabilities for new user experiences, we will continue to collaborate closely with product teams to understand and measure their societal impacts and to develop new modeling techniques that enable the products to uphold <a href="https://ai.google/principles/">Google’s AI Principles</a>.</li>
<li><em>Tools and Techniques:</em> We will develop novel techniques to advance our ability to discover unknown failures, explain model behaviors, and improve model output through training, responsible generation, and failure mitigation.</li>
<li><em>Demonstrating AI’s Social Benefit:</em> We plan to expand our efforts on <a href="https://globalgoals.withgoogle.com/globalgoals">AI for the Global Goals</a>, bringing together research, technology, and funding to accelerate progress on the <a href="https://sdgs.un.org/goals">Sustainable Development Goals</a>. This commitment will include <a href="https://blog.google/outreach-initiatives/google-org/our-commitment-on-using-ai-to-accelerate-progress-on-global-development-goals/">$25 million to support NGOs and social enterprises</a>. We will further our work on inclusion and equity by forming more collaborations with community-based experts and impacted communities. This includes continuing the Equitable AI Research Roundtables (EARR), focused on the potential impacts and downstream harms of AI, with community-based experts from the <a href="https://belonging.berkeley.edu/">Othering and Belonging Institute</a> at UC Berkeley, <a href="https://www.policylink.org/">PolicyLink</a>, and Emory University School of Law.</li>
</ul>

<p>Building ML models and products in a responsible and ethical manner is both our core focus and our core commitment.</p>

<h2>Acknowledgements</h2>

<p><em>This work reflects the efforts from across the Responsible AI and Human-Centered Technology community, from researchers and engineers to product and program managers, all of whom contribute to bringing our work to the AI community.</em></p>

<h2>Google Research, 2022 &amp; Beyond</h2>

<p>This was the second blog post in the “Google Research, 2022 &amp; Beyond” series.
Other posts in this series are listed below:</p>

<ul>
<li><a href="https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html#LanguageModels">Language Models</a></li>
<li><a href="https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html#ComputerVision">Computer Vision</a></li>
<li><a href="https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html#MultimodalModels">Multimodal Models</a></li>
<li><a href="https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html#GenerativeModels">Generative Models</a></li>
<li><a href="https://ai.googleblog.com/2023/01/google-research-2022-beyond-responsible.html">Responsible AI</a> (this post)</li>
<li>Algorithms*</li>
<li>ML &amp; Computer Systems</li>
<li>Robotics</li>
<li>Health</li>
<li>General Science &amp; Quantum</li>
<li>Community Engagement</li>
</ul>

<p><em>* Articles will be linked as they are released.</em></p>

<p><a href="http://ai.googleblog.com/2023/01/google-research-2022-beyond-responsible.html">Source link</a></p>