{"id":550,"date":"2023-05-18T19:09:53","date_gmt":"2023-05-18T19:09:53","guid":{"rendered":"https:\/\/todaysainews.com\/index.php\/2023\/05\/18\/pair-google-ai-blog\/"},"modified":"2025-04-27T07:33:33","modified_gmt":"2025-04-27T07:33:33","slug":"pair-google-ai-blog","status":"publish","type":"post","link":"https:\/\/todaysainews.com\/index.php\/2023\/05\/18\/pair-google-ai-blog\/","title":{"rendered":"PAIR \u2013 Google AI Blog"},"content":{"rendered":"<div id=\"post-body-2749680625311121514\">\n<p><span class=\"byline-author\">Posted by Lucas Dixon and Michael Terry, co-leads, PAIR, Google Research<\/span><\/p>\n<p>\nPAIR (People + AI Research) first <a href=\"https:\/\/blog.google\/technology\/ai\/pair-people-ai-research-initiative\/\">launched<\/a> in 2017 with the belief that \u201cAI can go much further \u2014 and be more useful to all of us \u2014 if we build systems with people in mind at the start of the process.\u201d We continue to focus on making AI more understandable, interpretable, fun, and usable by more people around the world. 
It\u2019s a mission that is particularly timely given the emergence of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Generative_artificial_intelligence\">generative AI<\/a> and <a href=\"https:\/\/en.wikipedia.org\/wiki\/Chatbot\">chatbots<\/a>.\n<\/p>\n<p><a name=\"more\"\/><\/p>\n<p>\nToday, PAIR is part of the <a href=\"https:\/\/research.google\/teams\/responsible-ai\/\">Responsible AI and Human-Centered Technology<\/a> team within Google Research, and our work spans this larger research space: We advance <a href=\"https:\/\/pair.withgoogle.com\/research\/\">foundational research<\/a> on human-AI interaction (HAI) and machine learning (ML); we publish educational materials, including the <a href=\"https:\/\/pair.withgoogle.com\/guidebook\/\">PAIR Guidebook<\/a> and <a href=\"https:\/\/pair.withgoogle.com\/explorables\/\">Explorables<\/a> (such as the recent Explorable looking at <a href=\"https:\/\/pair.withgoogle.com\/explorables\/uncertainty-ood\/\">how and why models sometimes make incorrect predictions confidently<\/a>); and we develop software tools like the <a href=\"https:\/\/pair-code.github.io\/lit\/\">Learning Interpretability Tool<\/a> to help people understand and debug ML behaviors. Our inspiration this year is &#8220;changing the way people think about what <em>THEY<\/em> can do with AI.\u201d This vision is inspired by the rapid emergence of generative AI technologies, such as large language models (LLMs) that power chatbots like <a href=\"https:\/\/bard.google.com\/\">Bard<\/a>, and new generative media models like Google&#8217;s <a href=\"https:\/\/imagen.research.google\/\">Imagen<\/a>, <a href=\"https:\/\/sites.research.google\/parti\/\">Parti<\/a>, and <a href=\"https:\/\/google-research.github.io\/seanet\/musiclm\/examples\/\">MusicLM<\/a>. 
In this blog post, we review recent PAIR work that is changing the way we engage with AI.\n<\/p>\n<p><\/p>\n<h2>Generative AI research<\/h2>\n<p>\nGenerative AI is creating a lot of excitement, and PAIR is involved in a range of related research, from <a href=\"https:\/\/arxiv.org\/abs\/2304.03442\">using language models to simulate complex community behaviors<\/a> to studying how artists adopted generative image models like <a href=\"https:\/\/imagen.research.google\/\">Imagen<\/a> and <a href=\"https:\/\/sites.research.google\/parti\/\">Parti<\/a>. These latter &#8220;text-to-image&#8221; models let a person describe, in text, an image for the model to generate (e.g., &#8220;a gingerbread house in a forest in a cartoony style&#8221;). In a forthcoming paper titled \u201c<a href=\"https:\/\/arxiv.org\/abs\/2303.12253\">The Prompt Artists<\/a>\u201d (to appear in <a href=\"https:\/\/cc.acm.org\/2023\/\">Creativity and Cognition 2023<\/a>), we found that users of generative image models strive not only to create beautiful images, but also to create unique, innovative styles. To achieve these styles, some users even seek out specialized vocabulary to develop their visual style; for example, they may visit architectural blogs to learn domain-specific terms they can adopt to produce distinctive images of buildings.\n<\/p>\n<p>\nWe are also researching solutions to challenges faced by prompt creators who, with generative AI, are essentially programming without using a programming language. As an example, we developed <a href=\"https:\/\/storage.googleapis.com\/pub-tools-public-publication-data\/pdf\/be5d0f3efae124b5eefe0ff3f32c7ffcf1daf421.pdf\">new methods<\/a> for extracting semantically meaningful structure from natural language prompts. 
We have applied these structures to prompt editors to provide features similar to those found in other programming environments, such as semantic highlighting, autosuggest, and structured data views.\n<\/p>\n<p>\nThe growth of generative LLMs has also opened up new techniques to solve important long-standing problems. <a href=\"https:\/\/arxiv.org\/abs\/2302.06541\">Agile classifiers<\/a> are one approach we\u2019re taking to leverage the semantic and syntactic strengths of LLMs to solve classification problems related to safer online discourse, such as nimbly blocking new types of toxic language as quickly as they evolve online. The big advance here is the ability to develop high-quality classifiers from very small datasets \u2014 as small as 80 examples. This suggests a positive future for online discourse and its moderation: instead of collecting millions of examples over months or years to attempt to create universal safety classifiers for all use cases, more agile classifiers might be created by individuals or small organizations, tailored to their specific use cases, and iterated on and adapted within a day (e.g., to block a new kind of harassment being received or to correct unintended biases in models). As an example of their utility, these methods recently <a href=\"https:\/\/www.aclweb.org\/portal\/content\/semeval-2023-task-10-explainable-detection-online-sexism-edos\">won a SemEval competition<\/a> to identify and explain sexism.\n<\/p>\n<p>\nWe&#8217;ve also developed <a href=\"https:\/\/arxiv.org\/abs\/2303.08114\">new state-of-the-art explainability methods<\/a> to identify the role of training data in model behaviors and misbehaviors. By <a href=\"https:\/\/arxiv.org\/abs\/2302.06598\">combining training data attribution methods with agile classifiers<\/a>, we also found that we can identify mislabeled training examples. 
This makes it possible to reduce the noise in training data, leading to significant improvements in model accuracy.\n<\/p>\n<p>\nCollectively, these methods are critical to helping the scientific community improve generative models. They provide fast and effective techniques for content moderation and dialogue safety that help support creators whose content is the basis for generative models&#8217; impressive outputs. In addition, they provide direct tools for debugging model misbehavior, which leads to better generation.\n<\/p>\n<p><\/p>\n<h2>Visualization and education<\/h2>\n<p>\nTo lower barriers to understanding ML-related work, we regularly design and publish highly visual, interactive online essays, called <a href=\"https:\/\/pair.withgoogle.com\/explorables\/\">AI Explorables<\/a>, that provide accessible, hands-on ways to learn about key ideas in ML. For example, we recently published new AI Explorables on the topics of model confidence and unintended biases. In our latest Explorable, \u201c<a href=\"https:\/\/pair.withgoogle.com\/explorables\/uncertainty-ood\/\">From Confidently Incorrect Models to Humble Ensembles<\/a>,\u201d we discuss the problem with model confidence: models can sometimes be <em>very<\/em> confident in their predictions\u2026 and yet completely incorrect. Why does this happen and what can be done about it? Our Explorable walks through these issues with interactive examples and shows how we can build models with more appropriate confidence in their predictions by using a technique called <a href=\"https:\/\/ai.googleblog.com\/2021\/11\/model-ensembles-are-faster-than-you.html\">ensembling<\/a>, which works by averaging the outputs of multiple models. 
Another Explorable, \u201c<a href=\"https:\/\/pair.withgoogle.com\/explorables\/saliency\/\">Searching for Unintended Biases with Saliency<\/a>\u201d, shows how spurious correlations can lead to unintended biases \u2014 and how techniques such as saliency maps can detect some biases in datasets, with the caveat that it can be difficult to see bias when it\u2019s more subtle and sporadic in a training set.\n<\/p>\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n<tbody>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEjDUj2YMMcVzCHuAQ9TYS3IkauQbh9Tti-nZl6LZOMwwieYdyJM7DHShetIblRhW0hvTaPvDmK2f0SDu9XcLCVg0780I3GnYU0FrA27s0-MiQXQ5xPUCGqTSx-7KNJLwrlczkco2Ql5KzMeVHNXBl2oeaN0gMCPi7A2Bpc6eDpzjwZIHoNMiWLfQqL2vQ\/s724\/image3.gif\" imageanchor=\"1\" style=\"margin-left: auto; margin-right: auto;\"><img decoding=\"async\" border=\"0\" data-original-height=\"543\" data-original-width=\"724\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEjDUj2YMMcVzCHuAQ9TYS3IkauQbh9Tti-nZl6LZOMwwieYdyJM7DHShetIblRhW0hvTaPvDmK2f0SDu9XcLCVg0780I3GnYU0FrA27s0-MiQXQ5xPUCGqTSx-7KNJLwrlczkco2Ql5KzMeVHNXBl2oeaN0gMCPi7A2Bpc6eDpzjwZIHoNMiWLfQqL2vQ\/s16000\/image3.gif\"\/><\/a><\/td>\n<\/tr>\n<tr>\n<td class=\"tr-caption\" style=\"text-align: center;\">PAIR designs and publishes AI Explorables, interactive essays on timely topics and new methods in ML research, such as \u201c<a href=\"https:\/\/pair.withgoogle.com\/explorables\/uncertainty-ood\/\">From Confidently Incorrect Models to Humble Ensembles<\/a>,\u201d which looks at how and why models offer incorrect predictions with high confidence, and how \u201censembling\u201d the outputs of many models can help avoid this.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Transparency and the Data Cards Playbook<\/h2>\n<p>\nContinuing to advance our goal of helping people to 
understand ML, we promote transparent documentation. In the past, PAIR and Google Cloud developed <a href=\"http:\/\/modelcards.withgoogle.com\">model cards<\/a>. Most recently, we presented <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3531146.3533231\">our work on Data Cards<\/a> at <a href=\"https:\/\/facctconference.org\/2022\/\">ACM FAccT\u201922<\/a> and open-sourced the <a href=\"https:\/\/sites.research.google\/datacardsplaybook\/\">Data Cards Playbook<\/a>, a joint effort with the <a href=\"https:\/\/ai.googleblog.com\/2023\/04\/responsible-ai-at-google-research.html\">Technology, AI, Society, and Culture team<\/a> (TASC). <a href=\"https:\/\/sites.research.google\/datacardsplaybook\/\">The Data Cards Playbook<\/a> is a toolkit of participatory activities and frameworks to help teams and organizations overcome obstacles when setting up a transparency effort. It was created using an iterative, multidisciplinary approach rooted in the experiences of over 20 teams at Google, and comes with four modules: Ask, Inspect, Answer and Audit. These modules contain a variety of resources that can help you customize Data Cards to your organization\u2019s needs:\n<\/p>\n<ul>\n<li>18 Foundations: Scalable frameworks that anyone can use on any dataset type\n<\/li>\n<li>19 Transparency Patterns: Evidence-based guidance to produce high-quality Data Cards at scale\n<\/li>\n<li>33 Participatory Activities: Cross-functional workshops to navigate transparency challenges for teams\n<\/li>\n<li>Interactive Lab: Generate interactive Data Cards from markdown in the browser\n<\/li>\n<\/ul>\n<p>\nThe Data Cards Playbook is accessible as a learning pathway for startups, universities, and other research groups.\n<\/p>\n<p><\/p>\n<h2>Software Tools<\/h2>\n<p>\nOur team thrives on creating tools, toolkits, libraries, and visualizations that expand access and improve understanding of ML models. 
One such resource is <a href=\"https:\/\/knowyourdata.withgoogle.com\/\">Know Your Data<\/a>, which lets researchers interactively and qualitatively explore datasets, test a model\u2019s performance across various scenarios, and find and fix unintended dataset biases.\n<\/p>\n<p>\nRecently, PAIR released a new version of the <a href=\"https:\/\/pair-code.github.io\/lit\/\">Learning Interpretability Tool<\/a> (LIT) for model debugging and understanding. LIT v0.5 provides support for image and tabular data, new interpreters for tabular feature attribution, a &#8220;Dive&#8221; visualization for faceted data exploration, and performance improvements that allow LIT to scale to 100k dataset entries. You can find the <a href=\"https:\/\/github.com\/PAIR-code\/lit\/blob\/main\/RELEASE.md\">release notes<\/a> and <a href=\"https:\/\/pair-code.github.io\/lit\/\">code<\/a> on GitHub.\n<\/p>\n<p>\nPAIR has also contributed to <a href=\"https:\/\/developers.googleblog.com\/2023\/03\/announcing-palm-api-and-makersuite.html\">MakerSuite<\/a>, a tool for rapid prototyping with LLMs using prompt programming. MakerSuite builds on our earlier research on <a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3491101.3503564\">PromptMaker<\/a>, which won an honorable mention at <a href=\"https:\/\/chi2022.acm.org\/\">CHI 2022. 
<\/a>MakerSuite lowers the barrier to prototyping ML applications by broadening the types of people who can author these prototypes and by shortening the time spent prototyping models from months to minutes.\u00a0<\/p>\n<table align=\"center\" cellpadding=\"0\" cellspacing=\"0\" class=\"tr-caption-container\" style=\"margin-left: auto; margin-right: auto;\">\n<tbody>\n<tr>\n<td style=\"text-align: center;\"><a href=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEjT-pO6cN7vWIGTcUpC8mSbitlIn8zz0sYOyikxZbwU4-lNXoQw8FGP-LnN2wdJiTzpZsY2rl9Pw-CrZw7N2ZOZqwIM_BAWIQ0tWP_7FI88krRedQUXQ1cz_Jx_WBottMukv-9rIzUJFlFMzgL9dVTsDycSd7L2TbYSsmUCp-xIMif0rhtASefVZZXOSQ\/s1216\/reversedictionary.png\" imageanchor=\"1\" style=\"margin-left: auto; margin-right: auto;\"><img decoding=\"async\" border=\"0\" data-original-height=\"918\" data-original-width=\"1216\" src=\"https:\/\/blogger.googleusercontent.com\/img\/b\/R29vZ2xl\/AVvXsEjT-pO6cN7vWIGTcUpC8mSbitlIn8zz0sYOyikxZbwU4-lNXoQw8FGP-LnN2wdJiTzpZsY2rl9Pw-CrZw7N2ZOZqwIM_BAWIQ0tWP_7FI88krRedQUXQ1cz_Jx_WBottMukv-9rIzUJFlFMzgL9dVTsDycSd7L2TbYSsmUCp-xIMif0rhtASefVZZXOSQ\/s16000\/reversedictionary.png\"\/><\/a><\/td>\n<\/tr>\n<tr>\n<td class=\"tr-caption\" style=\"text-align: center;\">A screenshot of MakerSuite, a tool for rapidly prototyping new ML models using prompt-based programming, which grew out of PAIR&#8217;s prompt programming research.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><\/p>\n<h2>Ongoing work<\/h2>\n<p>\nAs the world of AI moves quickly ahead, PAIR is excited to continue to develop new tools, research, and educational materials to help change the way people think about what THEY can do with AI.\n<\/p>\n<p>\nFor example, we recently conducted <a href=\"https:\/\/savvaspetridis.github.io\/papers\/promptinfuser.pdf\">an exploratory study<\/a> with five designers (presented at <a href=\"https:\/\/chi2023.acm.org\/\">CHI<\/a> this year) that looks at how people with no ML programming experience or training can 
use prompt programming to quickly prototype functional user interface mock-ups. This prototyping speed can help inform designers about how to integrate ML models into products, and enables them to conduct user research sooner in the product design process.\n<\/p>\n<p>\nBased on this study, PAIR\u2019s researchers built <a href=\"https:\/\/savvaspetridis.github.io\/papers\/promptinfuser.pdf\">PromptInfuser<\/a>, a design tool plugin for authoring LLM-infused mock-ups. The plugin introduces two novel LLM interactions: input-output, which makes content interactive and dynamic, and frame-change, which directs users to different frames depending on their natural language input. The result is more tightly integrated UI and ML prototyping, all within a single interface.\n<\/p>\n<p>\nRecent advances in AI represent a significant shift in how easy it is for researchers to customize and control models for their research objectives and goals. These capabilities are transforming the way we think about interacting with AI, and they create many new opportunities for the research community. PAIR is excited about how we can leverage these capabilities to make AI easier to use for more people.\n<\/p>\n<p><\/p>\n<h2>Acknowledgements<\/h2>\n<p>\n  <em>Thanks to everyone in PAIR and to all our collaborators. 
<\/em>\n<\/p>\n<\/div>\n<p><a href=\"http:\/\/ai.googleblog.com\/2023\/05\/responsible-ai-at-google-research-pair.html\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Posted by Lucas Dixon and Michael Terry, co-leads, PAIR, Google Research PAIR (People + AI Research) first<\/p>\n","protected":false},"author":2,"featured_media":551,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[],"class_list":["post-550","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google-ai"],"_links":{"self":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts\/550","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/comments?post=550"}],"version-history":[{"count":1,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts\/550\/revisions"}],"predecessor-version":[{"id":2797,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts\/550\/revisions\/2797"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/media\/551"}],"wp:attachment":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/media?parent=550"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/categories?post=550"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/tags?post=550"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}