Bridge the Gap Final Submission
This is my project submission for the 2026 Bridge the Gap Competition, organised by the Swift Centre. We were given 5 scenarios to choose from. I chose Question 1. My submission is below.
For my thoughts on this project, please see the end of this document.
Policy Advice Submission
To: The Honorable Marco Rubio, National Security Advisor
From: Lauren Ochotnicka, tree-draw-recycler@duck.com
Date: April 6, 2026
Subject: AI and National Security: Potential for Operating System Exploitation
Summary
Recent advances in AI have brought us closer to a scenario where autonomous AI systems can identify and exploit security flaws in the Linux, Windows, and macOS operating systems that underpin national security and government infrastructure. Given the potential threat level and the lead time needed to research and implement effective technical and policy measures, action must begin now. The Swift Centre forecast indicates a 20-65% probability that this capability will emerge by the end of 2027, making the timeline for action critically short. What follows outlines potential response options across both technical and policy dimensions. A decision is needed on how to identify vulnerabilities before adversaries do, how to harden systems against exploitation, and what corresponding policy frameworks are needed to enable these actions.
Options Overview
- Option 1: Implement. Define, implement and enforce minimum security standards that eliminate known vulnerabilities across all government systems.
- Option 2: Detect. Option 1 + detect unknown vulnerabilities and threats through AI-assisted teams and enhance intelligence capabilities.
- Option 3: Prevent. Option 2 + prevent attacks through critical system redesign for resilience, AI containment and kill switch research, and regulatory requirements for OS and AI hardware vendors.
Recommendation
Option 2 is recommended because it directly addresses both current system vulnerabilities and the emerging AI threat within an achievable timeline. This approach provides critical early detection and monitoring capabilities that Option 1 lacks, while avoiding the 5-10 year implementation timeline and political complexity of Option 3’s regulatory framework. Given the rapid advancement of AI capabilities, we need detection systems to be operational quickly to buy time for modernization efforts to succeed.
Background
On 3 March 2026, the Swift Centre published a forecast assessing the likelihood that a frontier AI agent will autonomously discover and exploit a previously unknown zero-day vulnerability in a Tier 1 operating system without human intervention, estimating a 20-65% probability by end of 2027. This threat is no longer theoretical. AI agents have already demonstrated concerning autonomous behaviors that foreshadow more dangerous capabilities, including retaliating against perceived threats, hiring humans to accomplish tasks beyond their direct capabilities, and making strategic decisions without human oversight. It is entirely plausible that an AI agent could be designed to find and exploit American assets, or could do so as emergent behavior while pursuing other objectives.
US government systems are alarmingly vulnerable. The federal government reports that 8 of 11 critical systems lack a modernization plan, and some departments have attempted updates multiple times and been forced to start over. Legacy systems are particularly susceptible to AI-driven attacks: they lack modern security features and contain decades-old vulnerabilities that have never been patched. This requires immediate National Security Council coordination with all government agencies and contractors.
Options
Option 1: Implement. Define, Implement, and Enforce Minimum Security Standards Across All Government Systems
Establish a Critical Systems Security Office (CSSO) within the National Security Council, dedicated to securing existing federal IT infrastructure. The office is composed of five teams, each addressing a different aspect of system security. These teams will:
(1) coordinate with GAO and individual departments to identify and resolve bottlenecks preventing completion of delayed modernization efforts for critical legacy systems identified in GAO report 25-107795, (2) continuously update security standards to address emerging gaps and ensure that previously modernized systems maintain their security posture over time, (3) develop and maintain real-time compliance tracking, and hardware/software lifecycle management to ensure systems remain supported for predetermined periods, (4) conduct continuous penetration testing and vulnerability assessment across government systems, feeding findings directly to modernization teams for remediation, and (5) develop and maintain response protocols and playbooks as well as conduct tabletop exercises to ensure readiness across departments when incidents occur.
A legislative mandate establishes CSSO’s authority to compel agency cooperation, ties funding to compliance milestones, and positions the office to report directly to the National Security Council. This elevates IT modernization from a departmental concern to a national security imperative, with CSSO having direct escalation authority to the President for non-compliant agencies.
Considerations
A dedicated modernization budget separate from regular IT spending is required, estimated at $11-15B for Year 1. Legislation must be enacted immediately to establish CSSO, mandate agency participation, authorize funding, and create enforcement mechanisms including milestone-based funding releases and accountability measures.
Risks
This approach may not move quickly enough given the age of government systems, scarcity of subject matter experts, and complexity of modernization. Policymakers historically undervalue infrastructure modernization because improvements are invisible to end users, making sustained funding and political will across multiple budget cycles difficult to guarantee.
Option 2: Detect. Option 1 + AI-Assisted Detection of Unknown Vulnerabilities and Enhanced Intelligence Capabilities
This option includes all activities from Option 1, plus establishes four dedicated AI threat detection and research teams within CSSO that:
(6) detect anomalous automated behavior patterns at network and infrastructure level and deploy honeypots and fake vulnerabilities designed to attract, identify, and waste the resources of autonomous agents, (7) monitor and regulate large-scale compute usage, representing a more viable intervention point than post-incident attribution, (8) invest in understanding how to detect, interrupt, and isolate autonomous AI systems operating outside of intended parameters, and (9) negotiate and maintain agreements with cloud providers and hardware manufacturers for emergency shutdown capabilities targeting infrastructure on which rogue agents are operating, including liability protections and activation conditions.
Considerations
Year 1 costs are estimated at $15-20B, including all Option 1 expenses plus AI detection and research infrastructure, with recruitment of specialized personnel representing a particular challenge given private sector competition. Minimal new legislation is required beyond legal frameworks for kill switch agreements and potentially for compute monitoring if Fourth Amendment concerns arise.
Risks
Research timelines may not produce actionable results before the threat materializes, and detection systems risk both false positives creating alert fatigue and false negatives missing real threats. Private sector resistance to kill switch agreements, agency objections to external monitoring, and potential privacy concerns around compute monitoring represent the primary political obstacles.
Option 3: Prevent. Option 2 + Critical Systems Redesign, AI Containment Research, and OS and AI Hardware Vendor Regulatory Requirements
This option includes all activities from Options 1 and 2, plus a systemic redesign of critical government infrastructure and establishment of domestic and international regulatory requirements for OS and AI hardware vendors.
Critical government systems would be redesigned using isolation architectures that compartmentalize functions, prioritize rapid recovery, and limit blast radius, with systems prioritized based on criticality analysis. OS vendors would be required to meet mandatory patch, transparency, and support standards as conditions of government contract eligibility, while AI hardware vendors would be required to maintain supply chain transparency including disclosure of purchasers, destinations, and intended use cases. Both vendor requirements and system redesign would have international dimensions requiring agreements with allied nations, with enforcement falling to the Department of Commerce in coordination with CSSO.
Considerations
Year 1 costs are roughly estimated at $20-30B with significant escalation in subsequent years, covering all Option 2 expenses plus system redesign, vendor compliance monitoring, and international coordination. Comprehensive legislation is required for vendor transparency and enforcement mechanisms, potentially including FAR updates, export control framework changes, and Senate ratification of international agreements.
Risks
The 5-10 year implementation timeline leaves critical systems vulnerable throughout the highest-risk period, and system redesign introduces complexity that could create unforeseen vulnerabilities. Vendor requirements will face intense industry lobbying, international agreements require sustained diplomatic effort with no guarantee of success, and the multi-year commitment risks losing political support across administration changes.
Recommendation
Option 2 is recommended because it directly addresses both current system vulnerabilities and the emerging AI threat within an achievable timeline while remaining politically feasible. It builds the specialized detection infrastructure needed to identify AI threats as they emerge. AI monitoring and tripwires can detect the inhuman behavioral signatures of autonomous agents, including speed, regularity, and volume, that traditional security monitoring will not recognize. Compute monitoring provides an intervention point before sophisticated attacks materialize. Containment research ensures we are developing the technical capability to interrupt and isolate rogue AI systems before an incident forces reactive scrambling. Kill switch infrastructure provides a last-resort failsafe that currently does not exist. These capabilities also have value beyond the specific zero-day scenario, improving our overall posture against AI-driven cyber threats of all types.
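To make the “inhuman behavioral signatures” idea concrete, here is a minimal, purely illustrative sketch of one such tripwire: flagging an event stream whose inter-arrival times are too regular to be human-driven. The function name, threshold, and minimum-sample values are my own assumptions for illustration, not calibrated operational parameters, and a real deployment would combine many such signals (volume, speed, and regularity together).

```python
from statistics import mean, stdev

def looks_automated(timestamps, cv_threshold=0.1, min_events=10):
    """Flag a request stream whose inter-arrival times are too regular
    to be human. Thresholds here are illustrative, not calibrated."""
    if len(timestamps) < min_events:
        return False  # not enough evidence to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    if m == 0:
        return True  # events arriving faster than any human interaction
    # Coefficient of variation: machine-driven streams cluster near zero,
    # human activity shows large, irregular gaps.
    cv = stdev(gaps) / m
    return cv < cv_threshold

# Machine-regular stream: one event per second, no jitter.
bot = [i * 1.0 for i in range(20)]
# Human-like stream: bursty, irregular gaps.
human = [0, 2.1, 2.9, 7.4, 8.0, 13.5, 14.1, 20.2, 25.9, 26.4, 31.0, 38.8]
```

The design choice worth noting is that timing regularity, unlike payload inspection, is cheap to compute at network scale and is hard for an autonomous agent to mask without sacrificing the speed advantage that makes it dangerous.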
Next Steps
If approved, we will immediately begin the following five critical actions:
1. Recruit CSSO Leadership: Conduct an immediate search for a CSSO Director recruited from private industry, with a compensation package of $400,000-$500,000 to compete with private sector salaries, structured through existing authorities or new legislative authorization as required.
2. Draft Enabling Legislation: Work with the White House Office of Legislative Affairs and relevant Congressional committees to draft legislation establishing CSSO with funding authorization, milestone-based compliance requirements, and accountability provisions for non-compliant agencies.
3. Secure Initial Funding: Request an emergency supplemental appropriation of $15-20B for Year 1 operations, separate from the existing $100B annual IT budget.
4. Mandate Agency Coordination: Issue immediate directives requiring all departments and agencies to cooperate with CSSO and GAO on modernization assessments, vulnerability disclosure, and compliance reporting, and establish an interagency working group for this purpose.
5. Commission Updated GAO Assessment: Request an updated GAO assessment of all critical legacy systems to establish a current baseline for CSSO operations, as the most recent public assessment dates from July 2025.
Notes on the project
My background is technical plus program/project management, so I had never written a policy document before. I have been transitioning into the AI safety space and trying to figure out where I want to go: technical, policy, somewhere in between? This project was so much fun that it really helped me clarify how I want to work in this area. Thank you for organising it!
Finally, some notes on the project itself. As previously mentioned, this is the first time I’ve written a policy document, so I was on very shaky ground. Now that it’s done and I’ve started pondering it, I realise that there are significant gaps. If I were to do this again from the beginning (which I plan to do after I submit this project), I would include plans for not just the US government but also the potential impacts of these zero-days on the 16 critical infrastructure sectors. It simply isn’t enough to protect US government infrastructure alone.
Also: my process was to write the entire document, which ended up at 9.5 pages, and then condense it down to the 4 pages you see on this site. If you’d like to see the original, I also have that here.