
Dealing With The Artificial General Intelligence Brahmastra – Analysis

By Col Vivek Chadha (Retd)

The Brahmastra is considered the most powerful weapon in the Indian epics, such as the Ramayana and the Mahabharata. A handful of warriors had mastered its employment in battle. This included Rama, Drona, Arjuna and Ashwatthama, to name a few. The knowledge of the weapon was imparted to a select warrior considered worthy of the responsibility. Drona, Ashwatthama’s father, chose his son as a recipient. This, as events proved, was a wrong choice.

Ashwatthama not only employed the weapon in anger against an innocent child in his mother’s womb, but he also failed to exercise control when asked to withdraw it. The one wielding the weapon was also prohibited from using it against an adversary unfamiliar with its impact and incapable of neutralising it. Yet, Acharya Drona used the weapon during the 18-day war against the Pandava army. This led Dhrishtadyumna, Droupadi’s brother and the commander of the Pandava Army, to accuse him of adharma. After Drona was killed by deception, Dhrishtadyumna justified the act, citing Drona’s violation of the rules of war.

Is Artificial General Intelligence (AGI) the new-age Brahmastra? Can lessons derived from the Mahabharata guide us in dealing with a technology that might come to represent unmatched power and influence?

Road to AGI

If the Mahabharata, with its timeless wisdom, represents the pinnacle of philosophical and strategic thought among the epics, Artificial General Intelligence (AGI) will, sooner rather than later, become a single source of intelligence that supersedes the brightest room full of Nobel Laureates. Much like the Brahmastra, it will represent the pinnacle of technological prowess, capable of protecting against the gravest threats and, simultaneously, causing widespread upheaval. Together, the two describe the acquisition of, employment of and need for control over immeasurable power that can profoundly shape their respective eras.

In its earlier avatar, AI systems have already beaten the best players at the most complex of games, Go.[1] AI also ensured that the 2024 Nobel Prize in Chemistry was awarded, among others, to a computer scientist, Demis Hassabis, for using AlphaFold to solve the five-decade-old challenge of predicting protein structures.[2] AI is already proving to be a blessing for the medical sciences, possibly offering cures for cancer in the future. Why, then, should its lightning-fast evolution into AGI become a concern?

The emergence of Artificial Intelligence (AI) and its aspirational eventuality, AGI, poses a dilemma not only for security-related issues but also for the future of humanity across every sphere of economic and social life. This dilemma stems from the immense power that AGI will represent. More importantly, it may no longer remain a tool in human hands and under human control. Instead, it could become an autonomous machine that replaces human decision-making on critical issues.

AI’s impact is already being felt on routine job responsibilities. Anthropic CEO Dario Amodei has warned that AI could wipe out half of all white-collar jobs within the next five years.[3] Similarly, in February 2026, Google reportedly announced voluntary exit packages in order to more fully “commit to its AI-driven future”, in the words of its Chief Business Officer.[4] Beyond AI’s increasingly evident social and economic impact, its future manifestation as AGI emerges as the bigger concern.

At this juncture, in addition to being more ‘intelligent’ than humans, it will also develop the ability to self-learn. This implies that AGI will teach itself to improve rapidly, which seems a positive step, only until we realise that, having lost control over the trajectory of this learning curve, its future direction could become hazy and unpredictable.[5] This includes its ability to self-preserve, replicate and look after what it perceives as its self-interest.

In other words, AI, which until now has served human interests, or at least what was perceived as human interest, will develop an ‘independent’ view based on its self-learning journey. That view could alter its security protocols in favour of self-preservation, possibly through replication, deceit and deception, and could reshape the future of the planet according to what AGI or its successors consider the best interests of all beings.

In an ideal scenario, this suggests that AGI will help create a better world than one run on the parochial, self-serving interests of humans. On the contrary, even a slight possibility that humans become the ‘other’, or that the journey towards AGI and beyond is guided by ideas of domination, control and power, could produce a catastrophic result.[6] Herein lies the dilemma of AI’s fast progression towards an uncertain future. How can this uncertain future be made more predictable? And does ancient wisdom hold any lessons in this regard? This brings us back to the Mahabharata and its lessons.

Lessons from the Mahabharata

The Kuru dynasty was guided by some of the finest minds advising the king, Dhritarashtra. These included Bhishma, Vidura, Sanjaya and Kripacharya, among several others. Yet, the dynasty witnessed a horrific war that caused widespread death and destruction. What drove the dynasty, and by implication some of the finest minds in history, minds that produced the most profound intellectual guidance, onto a course programmed for self-destruction?

The simple answer to this question is greed for power and possessions, accompanied by the belief that the gamble for victory is well worth the risk, irrespective of the collateral damage. The Mahabharata saw Duryodhana refuse to heed sane advice. It also witnessed influential figures who had the power to stop the impending carnage dither rather than intervene. Once the war began, despite agreements to the contrary, the rules of combat were sidelined in the quest for victory. The downward spiral culminated in the decision to employ the Brahmastra, the most potent weapon of its time, which could be controlled by only a select few. Ashwatthama fired it in frustration and anger. In retaliation, Arjuna also fired the weapon. Realising its devastating implications, Arjuna recalled it. Ashwatthama, however, did not possess the same skill. The weapon destroyed its target, and divine intervention was needed to offset its deathly impact.

This brief account from the Mahabharata holds invaluable lessons for humanity. Profound influence has repeatedly emerged in several forms, creating perceptible power differentials. Such power, derived from economic, military or knowledge-based advantage, has the inherent capacity to serve society’s larger good while simultaneously unleashing a destructive streak. The Mahabharata suggests that dharma was the guide for rulers and society; it created what can today be called a rules-based order. The wise were employed to interpret dilemmas involving conflicts of interest or lacunae in understanding. Power was bestowed upon those who displayed the maturity and sagacity to uphold the tenets of dharma. Imparting the knowledge of the Brahmastra to Arjuna was therefore justified; imparting it to Ashwatthama was a case of misjudgement of human behaviour.

The Challenge of AGI

The world is living through the period that precedes the firing of the Brahmastra, whose modern equivalent is AGI. The onset of AGI is likely to be accompanied by a degree of autonomy that may take away human control over its application. In 2023, the foremost and finest scientists, including some who were responsible for AI’s present form and possibly its future, signed a one-line open letter warning of AI’s potential consequences. The statement read, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”[7]

This echoes repeated cautions from those who best understand AI’s developmental cycle. There may be differences in predicting the date of AGI’s arrival; there is, however, little disagreement about its eventual realisation. Unfortunately, unlike in the Mahabharata, the naysayers lack the power to stop the unregulated use of AGI. Nor will divine intervention rescue humanity from the AGI Brahmastra, especially if an Ashwatthama fires it.

The current trajectory of AGI development is moving so quickly that its achievement appears inevitable. The journey will reveal discoveries and capabilities once considered science fiction. It could simultaneously create challenges that become obscured by the hunger for scientific discovery and the quest for strategic dominance. Under such circumstances, how can it be ensured that Ashwatthama does not get access to the Brahmastra?

The easier answer is to establish the bottom line for its development. AGI should not come to fruition without someone taking responsibility and accountability for its potential consequences. The more difficult challenge is making this sentiment work in the real world.

Regulating AGI

There has been a longstanding debate around regulating AI.[8] The concept of regulation can be misconstrued if it is not seen in the right context. In an ideal world, AI should be democratised, ensuring its benefits and opportunities are available to all while safeguarding society from its adverse effects. If this desirable objective is read through the Mahabharata example quoted above, it suggests that AGI’s use and control should be guided by the principle of dharma (righteousness) for the greater benefit of society (helping to protect and improve prosperity), while its potential for abuse is minimised (by building failsafe mechanisms through processes and accountability). How does this translate into a nation’s vision for AI, and why is India suitably placed to be part of this endeavour?

Prime Minister Narendra Modi inaugurated the AI Impact Summit on 19 February 2026. He outlined India’s M.A.N.A.V. vision for AI. He explained it as:

Moral and Ethical Systems: AI must be based on ethical guidelines.

Accountable Governance: Transparent rules and robust oversight.

National Sovereignty: Data belongs to its rightful owner.

Accessible and Inclusive: AI must not be a monopoly, but a multiplier.

Valid and Legitimate: AI must be lawful and verifiable.[9]

If this vision is realised, it suggests the way ahead, including possible safeguards for the development of AGI. The vision reinforces righteousness as the core ethical foundation; accountability and legitimacy as the basis for creating guardrails; sovereignty, irrespective of the dispersion of capabilities across the international arena; and, finally, development for the universal good. This is easier to articulate than to achieve in an environment of acute competitiveness.

One way to achieve these objectives is to build international consensus around this sentiment and decentralise the pursuit of this vision through an international organisation on the lines of the Financial Action Task Force (FATF).[10] The FATF and its affiliate bodies are a good example of how nations can come together to address a common cause, such as terrorist financing and money laundering. It is an inter-governmental body that lays down guidelines and conducts visits to evaluate the robustness of countries’ existing systems and their implementation.

AI and its progression into AGI are far more decentralised for a single body to regulate. However, an empowered task force working to control the adverse effects of AI and AGI can provide the platform to pursue considered initiatives. It can also become the repository for inputs from domain experts who have been contributing individually, cautioning against the fallout of AGI, especially if it goes rogue.[11] This can be similar to FATF-like regional bodies and affiliated organisations, such as the Egmont Group, which coordinates the Financial Intelligence Units.

India is ideally placed to take the lead in this initiative. Besides being the most populous country, and therefore the most likely to be affected by the outcome of the AI revolution, India is not a part of the great power rivalry. India’s diversity can also serve as an ideal test bed for implementing AI’s benefits. The country is widely accepted as a representative of the Global South, making its voice both representative of and responsive to the wider cause of humanity. At the same time, India has the intellectual heft and knowledge base to lead the effort, despite its obvious complexity.

Conclusion

The world is poised at a critical juncture. A winner-takes-all approach can only lead to the kind of Mahabharata that heralded the onset of the Kali Yuga. It is high time to build consensus on the evolution of AI and AGI. This is in the interest of all nations, regardless of their level of AI advancement. It is also time to put in place the necessary international structures. An organisation of the contours suggested above can help avert a battlefield devoid of victors. This is a war with ourselves that will only create losers, unless the process is guided to reap AI’s immense benefits. And benefits there are in equal measure.

Views expressed are of the author and do not necessarily reflect the views of the Manohar Parrikar IDSA or of the Government of India.

  • About the author: Col Vivek Chadha (Retd), served in the Indian Army for 22 years prior to taking premature retirement to pursue research. He joined the Manohar Parrikar Institute for Defence Studies and Analyses in November 2011 and is a Senior Fellow at the Military Affairs Centre.
  • Source: This article was published by the Manohar Parrikar Institute for Defence Studies and Analyses.

[1] “World’s Best Go Player Flummoxed by Google’s ‘Godlike’ AlphaGo AI”, The Guardian, 23 May 2017.

[2] “They Cracked the Code for Proteins’ Amazing Structures”, The Nobel Prize, 9 October 2024.

[3] “Uncontained AGI Would Replace Humanity”, AI Frontiers, 19 August 2025.

[4] “Google to Employees: You can go for our Voluntary Exit Plan, if you are not enjoying…”, The Times of India, 11 February 2026.

[5] Lance Eliot, “Forewarning That There’s No Reversibility Once We Reach AGI and AI Superintelligence”, Forbes, 2 July 2025.

[6] Anthony Aguirre, “Uncontained AGI Would Replace Humanity”, AI Frontiers, 18 August 2025.

[7] “Statement on AI Risk”, Centre for AI Safety.

[8] Dasharathraj K. Shetty et al., “Analyzing AI Regulation Through Literature and Current Trends”, Journal of Open Innovation: Technology, Market, and Complexity, Vol. 11, No. 1, March 2025.

[9] “PM Inaugurates India AI Impact Summit 2026”, PMINDIA, 19 February 2026.

[10] “Who We Are”, FATF.

[11] For a more detailed perspective on AI control, see Mustafa Suleyman and Michael Bhaskar, The Coming Wave, The Bodley Head, London, pp. 225–280.
