Experts have cautioned against deploying large language models (LLMs) in fully autonomous lethal weapon systems, according to a Bloomberg post on X highlighting concerns about the current capabilities of AI technologies in military applications. The warning comes amid growing debate over the ethical implications and potential risks of integrating AI into defense systems. Specialists argue that the technology is not yet reliable enough to be trusted with life-and-death decisions, and they emphasize the need for stringent regulation and oversight. The discussion continues as governments and organizations weigh technological advancement against ethical responsibility in warfare.