The rapid rise of AI warfare is forcing the military to rethink one of its core assumptions: that humans remain in control of machines on the battlefield. Experts now warn that this belief may be more fragile than it seems.
No longer limited to analyzing intelligence, artificial intelligence technologies are now integrated into weapon systems that select their own targets, defend against missiles, and guide drones.
This growing capability raises a question that is no longer whether artificial intelligence will be used on the battlefield, but to what extent humans will remain in control.
The principle underlying current policy is to keep “humans in the loop”. Guidelines hold that this practice preserves human accountability and reduces risk.
Some scholars argue, however, that this approach offers less assurance than it seems: modern artificial intelligence algorithms remain black boxes, meaning that even their developers do not fully understand the calculations behind a given decision. Human approval can thus be granted without complete understanding of what is being approved.
The danger of black-box AI systems
According to experts, this misalignment between human intent and machine behavior stems from a lack of transparency.
This means that AI does not simply follow instructions; it interprets them.
In wartime, for example, an AI system may choose a target that accomplishes its mission without taking ethical boundaries into account.
The danger lies not in the fact that AI acts independently, but that humans cannot foresee its intentions before authorizing it to act. Pressure to deploy autonomous systems also arises from competition.
If one side adopts fast, machine-driven decision-making, its adversaries may feel forced to do the same.
