Ukraine's Deputy Prime Minister for Innovation, Education, Science and Technology and Minister of Digital Transformation, Mykhailo Fedorov, underlines the need for AI regulation that advances national defence responsibly.
The government seeks to implement a framework enabling the tracking of military assets and the deployment of countermeasures.
A forthcoming white paper will guide businesses on the approach, timing, and phases of regulatory implementation.
Companies will be encouraged to adopt voluntary codes of conduct.
Fedorov clarifies the government's stance:
"We do not seek to regulate the AI market but rather to strike a balance between business interests and ensuring adequate protection of citizens from AI-related risks. Before introducing legally binding regulations, we consider global realities."
Contributions from businesses, scientists, and educators within the AI Expert Committee at the Ministry of Digital Transformation have shaped this roadmap.
The government aims to finalise the draft once the European Union passes its AI Act, which is still in development.
EU and NATO Aspirations
While Ukraine is not yet a member of the European Union (EU) or NATO, the country aspires to join both.
However, the European Commission emphasises the need for Ukraine to reform its media, judiciary, and anti-corruption laws before considering EU membership.
In the interim, Ukraine's digital transformation ministry has leveraged technology to counter Russia's invasion.
Non-fungible tokens (NFTs) have been sold to raise funds, and crypto wallet addresses have been published for donations.
Weaponised Deepfakes
Ukraine's pursuit of AI regulations occurs in the context of growing concerns about AI "deepfakes" being used as military and financial weapons.
A foreign policy paper from the Brookings Institution highlighted the challenges democratic leaders face in regulating deepfakes.
In March 2022, the Ukrainian government distanced itself from a video that appeared to show President Volodymyr Zelenskyy urging Ukrainians to lay down their arms.
The video was exposed as a deepfake: synthetic footage generated by AI deep-learning algorithms trained on existing images and audio of the president.
Draft EU legislation would require companies such as OpenAI to disclose when content is AI-generated and to publish summaries of the copyrighted data used to train their models, measures aimed at combating disinformation.
Ongoing Conflict
The use of deepfakes has immediate implications in the recent conflict between Hamas and Israel.
AI tools have been used to produce fabricated videos of bombings, fuelling public scepticism about which information can be trusted during the Israel-Palestine conflict.