
Human-in-the-Loop or Loophole? Targeting AI and Legal Accountability

There is no doubt that incorporating artificial intelligence (AI) into the targeting cycle has operational advantages. In a complex urban scenario, an AI-driven decision-support system (AI-DSS) can rapidly synthesize enormous volumes of data from diverse ISR, signals intelligence, and other feeds at a velocity no human analyst could match. In theory, this innovation sharpens a commander's situational awareness, identifies military objectives more accurately, and models collateral damage with greater precision.

The objective is a "cleaner" battlefield: faster and more accurate targeting with lower collateral damage to civilians and civilian infrastructure. The integration of AI-driven systems is increasingly viewed as a mechanism for fulfilling the core mandates of International Humanitarian Law (IHL). By providing commanders with more granular data and more precise modeling, these tools are designed to serve the principle of distinction, which requires parties to direct attacks only at combatants and military objectives. The speed and accuracy of such systems are likewise intended to support the principle of proportionality, helping decision-makers ensure that an attack's expected collateral harm is not excessive in relation to the anticipated military advantage.

Nonetheless, as these systems evolve from rudimentary advisory mechanisms into sophisticated recommendation engines, the legal stakes rise sharply. The very ability of artificial intelligence to process vast stores of information at high speed compounds the "black box" problem: the operator may be unable to comprehend why the system, through an automated process, has pinpointed a particular person or object as a target. That opacity makes it difficult to evaluate the system's outputs against established legal standards. The result is an accountability void that current international humanitarian law fails to address.
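To make the opacity concrete, consider a minimal sketch in Python. The class, field names, and confidence figure below are illustrative assumptions for this example only, not features of any fielded AI-DSS; the point is simply that an operator can be handed a recommendation and a confidence score without any articulable "why."

    # A purely illustrative sketch of the "black box" problem. All names
    # and values here are invented, not drawn from any real system.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TargetRecommendation:
        target_id: str            # the object or person the model flagged
        confidence: float         # the model's self-reported certainty (0-1)
        rationale: Optional[str]  # human-legible reasoning; a black-box model
                                  # has none to give, only weighted features

    def operator_review(rec: TargetRecommendation) -> str:
        # The operator sees WHAT is recommended and HOW SURE the model claims
        # to be, but not WHY -- which is the input a legal validation needs.
        if rec.rationale is None:
            return (f"Cannot validate {rec.target_id}: the {rec.confidence:.0%} "
                    "score has no articulable basis the operator can assess.")
        return f"{rec.target_id} validated on stated basis: {rec.rationale}"

    print(operator_review(TargetRecommendation("track-042", 0.97, None)))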

The “HITL” Accountability Gap

Almost all military doctrines that incorporate AI promise a "human-in-the-loop" (HITL). The standard implies that a human retains final control and therefore bears legal and moral responsibility for the engagement. But what does this mean in practice under existing legal frameworks?

Consider, for example, an AI-DSS that, trained on terabytes of data, recommends an action that turns out to be a flagrant violation of International Humanitarian Law (IHL), such as an attack on a hospital the system had misclassified as a command center. Who is responsible?

  • Is it the commander who, under time-critical pressure and nearing desperation, accepted a recommendation the system presented with near-certain confidence?
  • Is it the analyst who "validated" the target without access to the millions of data points behind the system's conclusion?
  • Is it the developer who wrote the algorithm months or years earlier, stripped of the operational context and working in the abstract?

If the human operator's function is reduced to "servicing the target" the machine has picked, then it is the AI that runs the loop and the human who is merely in it. The danger begins when human beings are left with only an illusory choice. As the International Committee of the Red Cross (ICRC) points out, accountability presupposes a degree of "predictability" in how the system's functions will operate.

There is ongoing concern that the capabilities of advanced artificial intelligence are outrunning the controls of existing International Humanitarian Law frameworks. This "accountability by default" creates a legal paradox. By placing the entire burden on the final operator, the law ignores the reality that a human cannot exercise true agency without the time or information to contest a machine's high-confidence recommendation. It effectively transforms the human "in the loop" into a rubber stamp for automated processes, leaving that person legally responsible for outcomes they could not realistically predict or prevent.

A Framework for “Meaningful Human Control”

To address this gap, commanders, operators, and their legal counsel must move beyond the "in-the-loop" notion and engage with Meaningful Human Control (MHC) on its practical and definitional terms. Numerous states and civil society organizations have elaborated this threshold. As they define it, MHC is not satisfied by a human's mere presence; the human must exercise genuine legal agency over the actions the AI facilitates.

For military lawyers and commanders who must justify target selections, MHC can be assessed against three benchmarks, illustrated in the sketch that follows them.

The Comprehension Test: Can the human operator explain why the AI is recommending a particular target? An understanding of the system's code is not required. What is essential is an understanding of the system's inputs (e.g., "It is tracking movement"), its logic (e.g., "It flags vehicles matching this profile"), and, importantly, its shortcomings (e.g., "It cannot tell the difference between a combatant carrying a weapon and a civilian carrying a shovel"). An operator who cannot supply the "why" behind a recommendation cannot legally or operationally validate that target.

The Time Test: Does the operator have enough time to be meaningfully involved? As planning cycles compress from hours to mere seconds, the pressure to accept an AI recommendation without scrutiny will intensify. A person forced to veto a machine's decision within a vanishingly narrow window ceases to exert true control and becomes a mere procedural cog. Legal reviews of automated systems must therefore establish realistic time standards for determining whether a human can actually perform a meaningful legal assessment under such high-velocity conditions.

The Legal Agency Test: Is the final targeting decision made by a human? The AI may provide the final technical recommendation, but it must never be the final legal arbiter of an attack. An operator must retain the ability to step outside the machine's logic and evaluate the strike against the constitutive principles governing the use of force, with particular attention to the requirement for restraint. The operator holds ultimate authority to dismiss any recommendation, even one with high statistical confidence, based on their unique understanding of the operational context or a "gut feeling." Ultimately, the AI serves only as a tool for computation, while the human remains the primary legal agent responsible for the decision.
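A minimal sketch, assuming a hypothetical approval workflow, shows how the three benchmarks might be checked in sequence. The 60-second floor, field names, and pass/fail logic are invented for illustration and reflect no operational or legal standard; the point is that each test is checkable, and failing any one of them defeats the claim of control.

    # Illustrative sketch of the three MHC benchmarks as gates in a
    # hypothetical approval workflow. All thresholds and names are assumed.
    from dataclasses import dataclass
    from typing import Optional

    MIN_REVIEW_SECONDS = 60.0  # assumed minimum window for a legal assessment

    @dataclass
    class EngagementReview:
        operator_rationale: Optional[str]  # the operator's own "why"
        seconds_available: float           # decision window before it lapses
        human_decision: Optional[str]      # "approve"/"reject"; None = default

    def meaningful_human_control(r: EngagementReview) -> bool:
        # Comprehension test: the operator must be able to state why the
        # system flagged this target, including its known failure modes.
        if not r.operator_rationale:
            return False
        # Time test: too short a veto window reduces the human to a
        # procedural cog, however confident the model claims to be.
        if r.seconds_available < MIN_REVIEW_SECONDS:
            return False
        # Legal agency test: the human, not the model, is the final arbiter;
        # an absent decision means the machine decided by default.
        return r.human_decision in ("approve", "reject")

    # A high-confidence recommendation rushed through in ten seconds fails:
    rushed = EngagementReview("vehicle matches convoy profile", 10.0, "approve")
    print(meaningful_human_control(rushed))  # False -- the time test fails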

Conclusion: Keeping the Law in the Loop

The reality is that AI-enabled targeting systems are not a futuristic challenge but a present one. Properly designed, such systems can help armed forces fulfill their IHL obligations precisely because they can process information at scale.

Accountability, however, must not be outpaced by technology. Invoking the "human-in-the-loop" standard as a mere plug for the gap borders on negligence. What is needed is a strong, rational, and legally supportable criterion for maintaining meaningful human control. For military leaders and jurists, the question is not simply whether a human is "in the loop," but whether that human decided with full comprehension of the issue, within a realistic timeframe, and with genuine legal agency. If not, it is the law itself that has been cut out of the loop.

The post Human-in-the-Loop or Loophole? Targeting AI and Legal Accountability appeared first on Small Wars Journal by Arizona State University.
