Britain to pursue ethical approach to deployment of AI on the battlefield
Britain is not moving fast enough when it comes to military artificial intelligence, a parliamentary report has warned.
The report says the UK has the potential to develop a first-class defence AI sector - but it is currently underdeveloped and needs significant cultivation by the MOD.
It also highlights the need for improvements in digital infrastructure, data management and the AI skills base if that potential is to be realised.
Speaking to BFBS Forces News, Dr Simona Soare, a defence AI expert at Lancaster University, said: "The defence AI strategy was very, very clear from the beginning in that the UK would pursue a very principled, human-centric and ethical approach to the deployment of AI on the battlefield.
"Moving forward, there are elements which I think need to be strengthened to be able to ensure that your principles in relation to the ethical deployment of AI are quantifiable and measurable."
The report leaves the UK at a crossroads on AI-driven defence: either invest in it, or risk becoming reliant on those who do.
It comes at a time when Google has reversed its pledge not to develop AI for weapons, arguing that "free nations" must lead in AI defence capabilities.
Meanwhile, the United States is already testing AI-piloted fighter jets like the X-62A Vista, which has shown advanced dogfighting skills under AI control.
The US Defense Advanced Research Projects Agency (Darpa) released footage of AI algorithms autonomously flying a specially modified F-16 against a human-piloted F-16, performing weaving manoeuvres through the sky.
Darpa said this represented a "transformational moment in aerospace history".
The US is also developing AI-enabled drone wingmen like the XQ-58A Valkyrie to support piloted aircraft in combat.
China is also investing in AI-assisted decision-making, autonomous naval systems and drone swarms.