Madhu Gottumukkala, the Indian-origin acting head of US federal cybersecurity, has come under scrutiny after reports that he uploaded internal government documents into a public version of ChatGPT. A Politico investigation said the uploads occurred during the summer of 2025 and involved files marked "For Official Use Only", triggering automated security alerts and an internal review by the Department of Homeland Security. Officials have emphasised that no classified information was involved, but the episode has drawn attention because it concerns the head of the agency responsible for warning others about AI-related data risks.

Officials familiar with the matter said the incident does not involve espionage or intentional wrongdoing. Gottumukkala reportedly had limited authorisation to experiment with AI tools at the time, though not to upload internal documents to public platforms. DHS has characterised the matter as a lapse in policy and judgement rather than a security breach, and there has been no allegation of malicious intent. Gottumukkala has cooperated with the internal review and has not publicly disputed the reporting.
Who is Madhu Gottumukkala
Gottumukkala serves as Acting Director and Deputy Director of the Cybersecurity and Infrastructure Security Agency (CISA), commonly known as the US cyber agency. CISA is responsible for defending federal networks and critical infrastructure from cyber and physical threats. He assumed the acting leadership role in May 2025 following a series of senior departures, placing him at the centre of US cybersecurity operations during a period of heightened focus on artificial intelligence, infrastructure resilience and election security.

Born in Andhra Pradesh, India, Gottumukkala has spent more than two decades working across the private and public sectors. His academic background includes engineering, computer science, technology management and a PhD in information systems. Before joining CISA leadership, he served as Chief Information Officer for the state of South Dakota, overseeing statewide IT and cybersecurity systems, and held senior technology roles in healthcare and telecommunications. His career has focused largely on software engineering, systems security and digital infrastructure.
What the ChatGPT incident involved
According to Politico's reporting, Gottumukkala uploaded contracting-related documents labelled "For Official Use Only" into a publicly accessible AI platform while experimenting with generative AI. The action triggered internal alerts within the Department of Homeland Security, which oversees CISA, prompting a formal review. The documents were described as sensitive but unclassified, and there was no indication they were accessed by unauthorised parties or disseminated beyond the AI system itself.

The episode has attracted attention because CISA routinely cautions other federal agencies and private companies against entering sensitive information into public AI tools. A 2023 report by the Government Accountability Office found that roughly 70 per cent of US federal agencies lacked adequate controls to mitigate AI-related data leakage risks, underscoring a broader governance gap as generative AI adoption accelerates across government.
Public response and wider debate
Public response has been divided. Some criticism has focused on leadership judgement and the need for clearer AI rules within cybersecurity agencies. Other responses, however, veered into xenophobic attacks referencing Gottumukkala's Indian origin and immigration background. Analysts note there is no evidence linking the incident to nationality or visa status, and say the controversy reflects wider political tensions around immigration, technology leadership and trust in government institutions.

The DHS review is expected to focus on compliance with internal policy rather than criminal liability. More broadly, the case has renewed calls for clearer, enforceable standards governing how public officials use generative AI tools. For CISA, the episode highlights the challenge of maintaining credibility as the nation's lead cybersecurity authority while adapting to fast-moving technologies that introduce new and poorly defined risks.