{"id":764,"date":"2023-10-28T02:49:48","date_gmt":"2023-10-28T02:49:48","guid":{"rendered":"https:\/\/todaysainews.com\/index.php\/2023\/10\/28\/ai-for-the-board-game-diplomacy-2\/"},"modified":"2025-04-27T07:32:35","modified_gmt":"2025-04-27T07:32:35","slug":"ai-for-the-board-game-diplomacy-2","status":"publish","type":"post","link":"https:\/\/todaysainews.com\/index.php\/2023\/10\/28\/ai-for-the-board-game-diplomacy-2\/","title":{"rendered":"AI for the board game Diplomacy"},"content":{"rendered":"<div>\n<div class=\"article-cover\">\n<div class=\"article-cover__header\">\n<p class=\"article-cover__eyebrow glue-label\">Research<\/p>\n<dl class=\"article-cover__meta\">\n<dt class=\"glue-visually-hidden\">Published<\/dt>\n<dd class=\"article-cover__date glue-label\">\n              <time datetime=\"2022-12-06\">\n                6 December 2022\n              <\/time>\n            <\/dd>\n<dt class=\"glue-visually-hidden\">Authors<\/dt>\n<dd class=\"article-cover__authors\">\n<p data-block-key=\"9efzy\">Yoram Bachrach, J\u00e1nos Kram\u00e1r<\/p>\n<\/dd>\n<\/dl>\n<\/div>\n<\/div>\n<div class=\"gdm-rich-text rich-text\">\n<p data-block-key=\"la8ai\"><b>Agents cooperate better by communicating and negotiating, and sanctioning broken promises helps keep them honest<\/b><\/p>\n<p data-block-key=\"qek03\">Successful communication and cooperation have been crucial for helping societies advance throughout history. The closed environments of board games can serve as a sandbox for modelling and investigating interaction and communication \u2013 and we can learn a lot from playing them. 
In our recent paper, <a href=\"https:\/\/www.nature.com\/articles\/s41467-022-34473-5\" rel=\"noopener\" target=\"_blank\">published today in Nature Communications<\/a>, we show how artificial agents can use communication to better cooperate in the board game Diplomacy, a vibrant domain in artificial intelligence (AI) research, known for its focus on alliance building.<\/p>\n<p data-block-key=\"tph8h\">Diplomacy is challenging as it has simple rules but high emergent complexity due to the strong interdependencies between players and its immense action space. To help solve this challenge, we designed negotiation algorithms that allow agents to communicate and agree on joint plans, enabling them to overcome agents lacking this ability.<\/p>\n<p data-block-key=\"9x29i\">Cooperation is particularly challenging when we cannot rely on our peers to do what they promise. We use Diplomacy as a sandbox to explore what happens when agents may deviate from their past agreements. Our research illustrates the risks that emerge when complex agents are able to misrepresent their intentions or mislead others regarding their future plans, which leads to another big question: What are the conditions that promote trustworthy communication and teamwork?<\/p>\n<p data-block-key=\"ainq1\">We show that the strategy of sanctioning peers who break contracts dramatically reduces the advantages they can gain by abandoning their commitments, thereby fostering more honest communication.<\/p>\n<h2 data-block-key=\"1n9tt\">What is Diplomacy and why is it important?<\/h2>\n<p data-block-key=\"lc7sd\">Games such as <a href=\"https:\/\/en.wikipedia.org\/wiki\/Deep_Blue_(chess_computer)\" rel=\"noopener\" target=\"_blank\">chess<\/a>, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Computer_poker_player\" rel=\"noopener\" target=\"_blank\">poker<\/a>, <a href=\"https:\/\/en.wikipedia.org\/wiki\/AlphaGo\" rel=\"noopener\" target=\"_blank\">Go<\/a>, and many <a 
href=\"https:\/\/en.wikipedia.org\/wiki\/AlphaStar_(software)\" rel=\"noopener\" target=\"_blank\">video games<\/a> have always been fertile ground for AI research. <a href=\"https:\/\/en.wikipedia.org\/wiki\/Diplomacy_(game)\" rel=\"noopener\" target=\"_blank\">Diplomacy<\/a> is a seven-player game of negotiation and alliance formation, played on an old map of Europe partitioned into provinces, where each player controls multiple units (<a href=\"https:\/\/media.wizards.com\/2015\/downloads\/ah\/diplomacy_rules.pdf\" rel=\"noopener\" target=\"_blank\">rules of Diplomacy<\/a>). In the standard version of the game, called Press Diplomacy, each turn includes a negotiation phase, after which all players reveal their chosen moves simultaneously.<\/p>\n<p data-block-key=\"v93m8\">The heart of Diplomacy is the negotiation phase, where players try to agree on their next moves. For example, one unit may support another unit, allowing it to overcome resistance by other units, as illustrated here:<\/p>\n<\/div>\n<figure class=\"single-media single-media--inline\"><figcaption class=\"single-media__caption\">\n<p data-block-key=\"hvahy\"><b>Two movement scenarios.<\/b><br \/><b>Left:<\/b> two units (a Red unit in Burgundy and a Blue unit in Gascony) attempt to move into Paris. As the units have equal strength, neither succeeds.<br \/><b>Right:<\/b> the Red unit in Picardy supports the Red unit in Burgundy, overpowering Blue\u2019s unit and allowing the Red unit into Paris.<\/p>\n<\/figcaption><\/figure>\n<div class=\"gdm-rich-text rich-text\">\n<p data-block-key=\"0oovj\">Computational approaches to Diplomacy have been researched since the 1980s, many of which were explored on a simpler version of the game called No-Press Diplomacy, where strategic communication between players is not allowed. 
Researchers have also proposed <a href=\"http:\/\/www.daide.org.uk\/\" rel=\"noopener\" target=\"_blank\">computer-friendly negotiation protocols<\/a>, sometimes called \u201cRestricted-Press\u201d.<\/p>\n<h2 data-block-key=\"6jevz\">What did we study?<\/h2>\n<p data-block-key=\"9vqil\">We use Diplomacy as an analog to real-world negotiation, providing methods for AI agents to coordinate their moves. We take <a href=\"https:\/\/www.deepmind.com\/publications\/learning-to-play-no-press-diplomacy-with-best-response-policy-iteration\" rel=\"noopener\" target=\"_blank\">our non-communicating Diplomacy agents<\/a> and augment them to play Diplomacy with communication by giving them a protocol for negotiating contracts for a joint plan of action. We call these augmented agents Baseline Negotiators, and they are bound by their agreements.<\/p>\n<\/div>\n<figure class=\"single-media single-media--inline\"><figcaption class=\"single-media__caption\">\n<p data-block-key=\"4ubr7\"><b>Diplomacy contracts.<\/b><br \/><b>Left:<\/b> a restriction allowing only certain actions to be taken by the Red player (they are not allowed to move from Ruhr to Burgundy, and must move from Piedmont to Marseilles).<br \/><b>Right:<\/b> A contract between the Red and Green players, which places restrictions on both sides.<\/p>\n<\/figcaption><\/figure>\n<div class=\"gdm-rich-text rich-text\">\n<p data-block-key=\"vulk6\">We consider two protocols: the Mutual Proposal Protocol and the Propose-Choose Protocol, discussed in detail in <a href=\"https:\/\/www.nature.com\/articles\/s41467-022-34473-5\" rel=\"noopener\" target=\"_blank\">the full paper<\/a>. Our agents apply algorithms that identify mutually beneficial deals by simulating how the game might unfold under various contracts. 
We use the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Cooperative_bargaining#:~:text=Nash%20bargaining%20game,-John%20Forbes%20Nash&amp;text=His%20solution%20is%20called%20the,and%20independence%20of%20irrelevant%20alternatives.\" rel=\"noopener\" target=\"_blank\">Nash Bargaining Solution<\/a> from <a href=\"https:\/\/en.wikipedia.org\/wiki\/Game_theory\" rel=\"noopener\" target=\"_blank\">game theory<\/a> as a principled foundation for identifying high-quality agreements. The game may unfold in many ways depending on the actions of players, so our agents use Monte-Carlo simulations to see what might happen in the next turn.<\/p>\n<\/div>\n<figure class=\"single-media single-media--inline\"><figcaption class=\"single-media__caption\">\n<p data-block-key=\"akleh\">Simulating next states given an agreed contract. Left: current state in a part of the board, including a contract agreed between the Red and Green players. Right: multiple possible next states.<\/p>\n<\/figcaption><\/figure>\n<div class=\"gdm-rich-text rich-text\">\n<p data-block-key=\"ff8bt\">Our experiments show that our negotiation mechanism allows Baseline Negotiators to significantly outperform baseline non-communicating agents.<\/p>\n<\/div>\n<figure class=\"single-media single-media--inline\"><figcaption class=\"single-media__caption\">\n<p data-block-key=\"hy316\">Baseline Negotiators significantly outperform non-communicating agents. Left: The Mutual Proposal Protocol. Right: The Propose-Choose Protocol. 
\u201cNegotiator advantage\u201d is the ratio of win rates between the communicating agents and the non-communicating agents.<\/p>\n<\/figcaption><\/figure>\n<div class=\"gdm-rich-text rich-text\">\n<h2 data-block-key=\"dkkms\">Agents breaking agreements<\/h2>\n<p data-block-key=\"c1vmi\">In Diplomacy, agreements made during negotiation are not binding (communication is \u201c<a href=\"https:\/\/en.wikipedia.org\/wiki\/Cheap_talk#:~:text=In%20game%20theory%2C%20cheap%20talk,the%20state%20of%20the%20world.\" rel=\"noopener\" target=\"_blank\">cheap talk<\/a>\u201d). But what happens when agents who agree to a contract in one turn deviate from it the next? In many real-life settings people agree to act in a certain way, but fail to meet their commitments later on. To enable cooperation between AI agents, or between agents and humans, we must examine the potential pitfall of agents strategically breaking their agreements, and ways to remedy this problem. We used Diplomacy to study how the ability to abandon our commitments erodes trust and cooperation, and identify conditions that foster honest cooperation.<\/p>\n<p data-block-key=\"qg83n\">We therefore consider Deviator Agents, which overcome honest Baseline Negotiators by deviating from agreed contracts. Simple Deviators simply \u201cforget\u201d they agreed to a contract and move however they wish. Conditional Deviators are more sophisticated, and optimise their actions assuming that other players who accepted a contract will act in accordance with it.<\/p>\n<\/div>\n<figure class=\"single-media single-media--inline\"><figcaption class=\"single-media__caption\">\n<p data-block-key=\"fya5e\">All types of our Communicating Agents. 
Under the green grouping terms, each blue block represents a specific agent algorithm.<\/p>\n<\/figcaption><\/figure>\n<div class=\"gdm-rich-text rich-text\">\n<p data-block-key=\"5inp6\">We show that Simple and Conditional Deviators significantly outperform Baseline Negotiators, with the Conditional Deviators overwhelmingly so.<\/p>\n<\/div>\n<figure class=\"single-media single-media--inline\"><figcaption class=\"single-media__caption\">\n<p data-block-key=\"xgfi0\">Deviator Agents versus Baseline Negotiator Agents. Left: The Mutual Proposal Protocol. Right: The Propose-Choose Protocol. \u201cDeviator advantage\u201d is the ratio of win rates between the Deviator Agents and the Baseline Negotiators.<\/p>\n<\/figcaption><\/figure>\n<div class=\"gdm-rich-text rich-text\">\n<h2 data-block-key=\"0m4py\">Encouraging agents to be honest<\/h2>\n<p data-block-key=\"mpuil\">Next we tackle the deviation problem using Defensive Agents, which respond adversely to deviations. We investigate Binary Negotiators, who simply cut off communications with agents who break an agreement with them. But shunning is a mild reaction, so we also develop Sanctioning Agents, who don\u2019t take betrayal lightly, but instead modify their goals to actively attempt to lower the deviator\u2019s value \u2013 an opponent with a grudge! We show that both types of Defensive Agents reduce the advantage of deviation, particularly Sanctioning Agents.<\/p>\n<\/div>\n<figure class=\"single-media single-media--inline\"><figcaption class=\"single-media__caption\">\n<p data-block-key=\"buyqs\">Non-Deviator Agents (Baseline Negotiators, Binary Negotiators, and Sanctioning Agents) playing against Conditional Deviators. Left: Mutual Proposal Protocol. Right: Propose-Choose Protocol. \u201cDeviator advantage\u201d values lower than 1 indicate that a Defensive Agent outperforms a Deviator Agent. 
A population of Binary Negotiators (blue) reduces the advantage of Deviators compared with a population of Baseline Negotiators (grey).<\/p>\n<\/figcaption><\/figure>\n<div class=\"gdm-rich-text rich-text\">\n<p data-block-key=\"4lccm\">Finally, we introduce Learned Deviators, who adapt and optimise their behaviour against Sanctioning Agents over multiple games, trying to render the above defences less effective. A Learned Deviator will only break a contract when the immediate gains from deviation are high enough and the ability of the other agent to retaliate is low enough. In practice, Learned Deviators occasionally break contracts late in the game, and in doing so achieve a slight advantage over Sanctioning Agents. Nevertheless, such sanctions drive the Learned Deviator to honour more than 99.7% of its contracts.<\/p>\n<p data-block-key=\"yvni3\">We also examine possible learning dynamics of sanctioning and deviation: what happens when Sanctioning Agents may themselves deviate from contracts, and whether there is an incentive to stop sanctioning when doing so is costly. Such issues can gradually erode cooperation, so additional mechanisms such as repeated interaction across multiple games or trust and reputation systems may be needed.<\/p>\n<p data-block-key=\"cmnti\">Our paper leaves many questions open for future research: Is it possible to design more sophisticated protocols to encourage even more honest behaviour? How could one combine these communication techniques with play under imperfect information? Finally, what other mechanisms could deter the breaking of agreements? Building fair, transparent and trustworthy AI systems is an extremely important topic, and it is a key part of DeepMind\u2019s mission. Studying these questions in sandboxes like Diplomacy helps us to better understand tensions between cooperation and competition that might exist in the real world. 
Ultimately, we believe tackling these challenges allows us to better understand how to develop AI systems in line with society\u2019s values and priorities.<\/p>\n<p data-block-key=\"b8aw2\">Read our full paper <a href=\"https:\/\/www.nature.com\/articles\/s41467-022-34473-5\" rel=\"noopener\" target=\"_blank\">here<\/a>.<\/p>\n<\/div>\n<aside class=\"notes\">\n<div class=\"glue-page\">\n<div class=\"gdm-rich-text notes__inner\">\n<h2 data-block-key=\"bjckn\">Acknowledgements<\/h2>\n<p data-block-key=\"7wqxo\">We would like to thank Will Hawkins, Aliya Ahmad, Dawn Bloxwich, Lila Ibrahim, Julia Pawar, Sukhdeep Singh, Tom Anthony, Kate Larson, Julien Perolat, Marc Lanctot, Edward Hughes, Richard Ives, Karl Tuyls, Satinder Singh and Koray Kavukcuoglu for their support and advice throughout the work.<\/p>\n<h2 data-block-key=\"oglgj\">Full paper authors<\/h2>\n<p data-block-key=\"1k3ox\">J\u00e1nos Kram\u00e1r, Tom Eccles, Ian Gemp, Andrea Tacchetti, Kevin R. McKee, Mateusz Malinowski, Thore Graepel, Yoram Bachrach.<\/p>\n<\/div>\n<\/div>\n<\/aside><\/div>\n<p><a href=\"https:\/\/deepmind.google\/discover\/blog\/ai-for-the-board-game-diplomacy\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Research Published 6 December 2022 Authors Yoram Bachrach, J\u00e1nos Kram\u00e1r Agents cooperate better by communicating and negotiating,<\/p>\n","protected":false},"author":2,"featured_media":765,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[],"class_list":["post-764","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-deepmind-ai"],"_links":{"self":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts\/764","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/comments?post=764"}],"version-history":[{"count":1,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts\/764\/revisions"}],"predecessor-version":[{"id":2692,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/posts\/764\/revisions\/2692"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/media\/765"}],"wp:at
tachment":[{"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/media?parent=764"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/categories?post=764"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/todaysainews.com\/index.php\/wp-json\/wp\/v2\/tags?post=764"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}